Emulates Node's zlib module for the browser. Can be used as a drop-in replacement with Browserify and webpack.
The heavy lifting is done using pako. The code in this module is modeled closely after the code in the source of Node core to get as much compatibility as possible.
https://nodejs.org/api/zlib.html
The following options/methods are not supported because pako does not support them yet.
params method

This is a utility that allows a function to figure out the file from which it was invoked. It does so by inspecting V8's stack trace at the time it is invoked.
Inspired by http://stackoverflow.com/questions/13227489
Note: this relies on Node/V8-specific APIs; other runtimes may not work.
Given:
// ./foo.js
const getCallerFile = require('get-caller-file');
module.exports = function() {
return getCallerFile(); // figures out who called it
};

getCallerFile(position = 2): where position is the stack frame whose fileName we want.

Node-style HMACs for use in the browser, with native HMAC functions in node. The API is the same as HMACs in node:
var createHmac = require('create-hmac')
var hmac = createHmac('sha224', Buffer.from('secret key'))
hmac.update('synchronous write') //optional encoding parameter
hmac.digest() // synchronously get result with optional encoding parameter
hmac.write('write to it as a stream')
hmac.end() //remember it's a stream
hmac.read() // only if you ended it as a stream though

Detect whether a value is an error
var isError = require("is-error");
console.log(isError(new Error('hi'))) // true
console.log(isError({ message: 'hi' })) // false

var bool = isError(maybeErr)

isError returns a boolean. It detects whether the argument is an error or not.
npm install is-error
npm test
Constant-time Buffer comparison for node.js. Should work with browserify too.
var bufferEq = require('buffer-equal-constant-time');
var a = new Buffer('asdf');
var b = new Buffer('asdf');
if (bufferEq(a,b)) {
// the same!
} else {
// different in at least one byte!
}

If you'd like to install an .equal() method onto the node.js Buffer and SlowBuffer prototypes:
require('buffer-equal-constant-time').install();
var a = new Buffer('asdf');
var b = new Buffer('asdf');
if (a.equal(b)) {
// the same!
} else {
// different in at least one byte!
}

To get rid of the installed .equal() method, call .restore():
The Lodash library exported as Node.js modules.
Using npm:
npm i -g npm
npm i --save lodash
In Node.js:
// Load the full build.
var _ = require('lodash');
// Load the core build.
var _ = require('lodash/core');
// Load the FP build for immutable auto-curried iteratee-first data-last methods.
var fp = require('lodash/fp');
// Load method categories.
var array = require('lodash/array');
var object = require('lodash/fp/object');
// Cherry-pick methods for smaller browserify/rollup/webpack bundles.
var at = require('lodash/at');
var curryN = require('lodash/fp/curryN');

See the package source for more details.
Note:
Install n_ for Lodash use in the Node.js < 6 REPL.
Tested in Chrome 74-75, Firefox 66-67, IE 11, Edge 18, Safari 11-12, & Node.js 8-12.
Automated browser & CI test runs are available.
Node-style md5 in pure JavaScript.
From NIST SP 800-131A: md5 is no longer acceptable where collision resistance is required such as digital signatures.
var MD5 = require('md5.js')
console.log(new MD5().update('42').digest('hex'))
// => a1d0c6e83f027327d8461063f4ac58a6
var md5stream = new MD5()
md5stream.end('42')
console.log(md5stream.read().toString('hex'))
// => a1d0c6e83f027327d8461063f4ac58a6

base64-js does basic base64 encoding/decoding in pure JS.
Many browsers already have base64 encoding/decoding functionality, but it is for text data, not all-purpose binary data.
Sometimes encoding/decoding binary data in the browser is useful, and that is what this module does.
With npm do:
npm install base64-js and var base64js = require('base64-js')
For use in web browsers do:
<script src="base64js.min.js"></script>
Get supported base64-js with the Tidelift Subscription
base64js has three exposed functions, byteLength, toByteArray and fromByteArray, each of which takes a single argument.
byteLength - Takes a base64 string and returns the length of the byte array
toByteArray - Takes a base64 string and returns a byte array
fromByteArray - Takes a byte array and returns a base64 string

This is a simplified import of the excellent diff-match-patch library by Neil Fraser into the Node.js environment. The match and patch parts are removed, as well as all the extra diff options. What remains is incredibly fast diffing between two strings.
The diff function is an implementation of “An O(ND) Difference Algorithm and its Variations” (Myers, 1986) with the suggested divide and conquer strategy along with several optimizations Neil added.
var diff = require('fast-diff');
var good = 'Good dog';
var bad = 'Bad dog';
var result = diff(good, bad);
// [[-1, "Goo"], [1, "Ba"], [0, "d dog"]]
// Respect suggested edit location (cursor position), added in v1.1
diff('aaa', 'aaaa', 1)
// [[0, "a"], [1, "a"], [0, "aa"]]
// For convenience
diff.INSERT === 1;
diff.EQUAL === 0;
diff.DELETE === -1;

Find the documentation URL for a given ESLint rule. Updated daily!
const getRuleUrl = require('eslint-rule-docs');
// Find url for core rules
getRuleUrl('no-undef');
// => { exactMatch: true, url: 'https://eslint.org/docs/rules/no-undef' }
// Find url for known plugins
getRuleUrl('react/sort-prop-types');
// => { exactMatch: true, url: 'https://github.com/yannickcr/eslint-plugin-react/blob/master/docs/rules/sort-prop-types.md' }
// If the plugin has no documentation, return repository url
getRuleUrl('flowtype/semi');
// => { exactMatch: false, url: 'https://github.com/gajus/eslint-plugin-flowtype' }
// If the plugin is unknown, returns an empty object
getRuleUrl('unknown-foo/bar');
// => {}

Uses Buffer to emulate the exact functionality of the browser's atob.
Note: Unicode may be handled incorrectly (like the browser).
It turns base64-encoded ascii data back to binary.
(function () {
"use strict";
var atob = require('atob');
var b64 = "SGVsbG8sIFdvcmxkIQ==";
var bin = atob(b64);
console.log(bin); // "Hello, World!"
}());

Check out unibabel.js
Docs released under Creative Commons.
This library is incredibly useful when working with HTTP headers. It allows you to get/set/check for headers in a caseless manner while also preserving the casing of headers the first time they are set.
has takes a name and, if it finds a matching header, returns that header name with the preserved casing it was set with.
set is fairly straightforward, except that if the header exists and clobber is disabled it will add ','+value to the existing header.
Swaps the casing of a header with the new one that is passed in.
var headers = {}
, c = caseless(headers)
;
c.set('a-Header', 'fdas')
c.swap('a-HEADER')
c.has('a-header') === 'a-HEADER'
headers === {'a-HEADER': 'fdas'}

A curated list of browser globals that commonly cause confusion and are not recommended to use without an explicit window. qualifier.
Some global variables in the browser are likely to be used by people without the intent of using them as globals, such as status, name, event, etc.
For example:
handleClick() { // missing `event` argument
this.setState({
text: event.target.value // uses the `event` global: oops!
});
}

This package exports a list of globals that are often used by mistake. You can feed this list to a static analysis tool like ESLint to prevent their usage without an explicit window. qualifier.
If you use Create React App, you don’t need to configure anything, as this rule is already included in the default eslint-config-react-app preset.
If you maintain your own ESLint configuration, you can do this:
var restrictedGlobals = require('confusing-browser-globals');
module.exports = {
rules: {
'no-restricted-globals': ['error'].concat(restrictedGlobals),
},
};

Escape any string to be a valid JavaScript string literal between double quotes or single quotes.
npm install js-string-escape
If you need to generate JavaScript output, this library will help you safely put arbitrary data in JavaScript strings:
jsStringEscape = require('js-string-escape')
console.log('"' + jsStringEscape('Quotes (\", \'), newlines (\n), etc.') + '"')
// => "Quotes (\", \'), newlines (\n), etc."

In other words, given any string s, the following invariants hold:

eval('"' + jsStringEscape(s) + '"') === s
eval("'" + jsStringEscape(s) + "'") === s
These eval expressions are safe with untrusted strings s.
Non-strings will be cast to strings.
This library has been checked against ECMAScript 5.1 and tested against all Unicode code points.
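To illustrate the escaping contract, here is a minimal sketch of such an escaper (not the library's implementation), handling only the characters that break JavaScript string literals:

```javascript
// Minimal sketch of a JS string escaper -- NOT the js-string-escape
// implementation, just an illustration of the eval invariant above.
function escapeJsString(s) {
  return String(s).replace(/[\\'"\n\r\u2028\u2029]/g, function (ch) {
    switch (ch) {
      case '\\': return '\\\\';
      case "'": return "\\'";
      case '"': return '\\"';
      case '\n': return '\\n';
      case '\r': return '\\r';
      case '\u2028': return '\\u2028';
      case '\u2029': return '\\u2029';
    }
  });
}

var s = 'Quotes (", \'), newlines (\n), etc.';
var roundTripped = eval('"' + escapeJsString(s) + '"');
console.log(roundTripped === s); // true
```

Because both quote characters are escaped, the result is safe inside either single- or double-quoted literals.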
Adds support for the timers module to browserify.
The public methods of the timers module are:
setTimeout(callback, delay, [arg], [...])
clearTimeout(timeoutId)
setInterval(callback, delay, [arg], [...])
clearInterval(intervalId)

and indeed, browsers support these already.
The timers module also includes some private methods used in other built-in Node.js modules:
enroll(item, delay)
unenroll(item)
active(item)

These are used to efficiently support a large quantity of timers with the same timeouts by creating only a few timers under the covers.
Node.js also offers the immediate APIs, which aren’t yet available cross-browser, so we polyfill those:
setImmediate(callback, [arg], [...])
clearImmediate(immediateId)

Linked lists are efficient when you have thousands (millions?) of timers with the same delay. Take a look at timers-browserify-full in this case.
node-asn1 is a library for encoding and decoding ASN.1 datatypes in pure JS. Currently BER encoding is supported; at some point I’ll likely have to do DER.
Mostly, if you actually need to read and write ASN.1, you probably don't need this readme to explain what and why. If you have no idea what ASN.1 is, see this: ftp://ftp.rsa.com/pub/pkcs/ascii/layman.asc
The source is pretty much self-explanatory, and has read/write methods for the common types out there.
The following reads an ASN.1 sequence with a boolean.
var Ber = require('asn1').Ber;
var reader = new Ber.Reader(Buffer.from([0x30, 0x03, 0x01, 0x01, 0xff]));
reader.readSequence();
console.log('Sequence len: ' + reader.length);
if (reader.peek() === Ber.Boolean)
console.log(reader.readBoolean());
The following generates the same payload as above.
var Ber = require('asn1').Ber;
var writer = new Ber.Writer();
writer.startSequence();
writer.writeBoolean(true);
writer.endSequence();
console.log(writer.buffer);
npm install asn1
See https://github.com/joyent/node-asn1/issues.
Merge objects using descriptors.
var thing = {
get name() {
return 'jon'
}
}
var animal = {
}
merge(animal, thing)
animal.name === 'jon'

Redefines destination's descriptors with source's.
Defines source’s descriptors on destination if destination does not have a descriptor by the same name.
From version 2.0 of the SPDX specification:
The Linux Foundation and the SPDX working groups are good people. Only they decide what “SPDX” means, as a standard and otherwise. I respect their work and their rights. You should, too.
I created this package by copying exception identifiers out of the SPDX specification. That work was mechanical, routine, and required no creativity whatsoever. - Kyle Mitchell, package author
United States users concerned about intellectual property may wish to discuss the following Supreme Court decisions with their attorneys:
Flatten an array of nested arrays into a single flat array. Accepts an optional depth.
npm install array-flatten --save
var flatten = require('array-flatten')
flatten([1, [2, [3, [4, [5], 6], 7], 8], 9])
//=> [1, 2, 3, 4, 5, 6, 7, 8, 9]
flatten([1, [2, [3, [4, [5], 6], 7], 8], 9], 2)
//=> [1, 2, 3, [4, [5], 6], 7, 8, 9]
(function () {
flatten(arguments) //=> [1, 2, 3]
})(1, [2, 3])

Unpipe a stream from all destinations.
Unpipes all destinations from a given stream. With streams 2+, this is equivalent to stream.unpipe(). When used with streams1-style streams (typically Node.js 0.8 and below), this module attempts to undo the actions done in stream.pipe(dest).
Like JSON.stringify, but doesn’t throw on circular references.
Takes the same arguments as JSON.stringify.
var stringify = require('json-stringify-safe');
var circularObj = {};
circularObj.circularRef = circularObj;
circularObj.list = [ circularObj, circularObj ];
console.log(stringify(circularObj, null, 2));

Output:

{
  "circularRef": "[Circular]",
  "list": [
    "[Circular]",
    "[Circular]"
  ]
}
stringify(obj, serializer, indent, decycler)
The first three arguments are the same as to JSON.stringify. The last is an argument that’s only used when the object has been seen already.
The default decycler function returns the string '[Circular]'. If, for example, you pass in function(k,v){} (return nothing) then it will prune cycles. If you pass in function(k,v){ return {foo: 'bar'}}, then cyclical objects will always be represented as {"foo":"bar"} in the result.
stringify.getSerialize(serializer, decycler)
Returns a serializer that can be used elsewhere. This is the actual function that’s passed to JSON.stringify.
Note that the function returned from getSerialize is stateful for now, so do not use it more than once.
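The behavior described above can be sketched with JSON.stringify and a stateful replacer. This is an illustration, not the package source; as a simplification it also replaces repeated non-cyclic references, and the name safeStringify is ours:

```javascript
// Sketch of the described behavior -- not the package implementation.
// A stateful replacer (like the one getSerialize returns) tracks seen
// objects and hands cycles to a decycler.
function safeStringify(obj, decycler, indent) {
  decycler = decycler || function () { return '[Circular]'; };
  var seen = new WeakSet();
  return JSON.stringify(obj, function (key, value) {
    if (typeof value === 'object' && value !== null) {
      if (seen.has(value)) return decycler(key, value);
      seen.add(value);
    }
    return value;
  }, indent);
}

var circ = {};
circ.circularRef = circ;
console.log(safeStringify(circ));
// {"circularRef":"[Circular]"}
console.log(safeStringify(circ, function () {}));
// {} -- a decycler that returns nothing prunes cycles
```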
Recursive object extending.
var deepExtend = require('deep-extend');
var obj1 = {
a: 1,
b: 2,
d: {
a: 1,
b: [],
c: { test1: 123, test2: 321 }
},
f: 5,
g: 123,
i: 321,
j: [1, 2]
};
var obj2 = {
b: 3,
c: 5,
d: {
b: { first: 'one', second: 'two' },
c: { test2: 222 }
},
e: { one: 1, two: 2 },
f: [],
g: (void 0),
h: /abc/g,
i: null,
j: [3, 4]
};
deepExtend(obj1, obj2);
console.log(obj1);
/*
{ a: 1,
b: 3,
d:
{ a: 1,
b: { first: 'one', second: 'two' },
c: { test1: 123, test2: 222 } },
f: [],
g: undefined,
c: 5,
e: { one: 1, two: 2 },
h: /abc/g,
i: null,
j: [3, 4] }
*/

Please report issues here.
Returns true if the current environment is a Continuous Integration server.
Please open an issue if your CI server isn’t properly detected :)
For CLI usage you need to have the is-ci executable in your PATH. There are a few ways to do that:

npm install is-ci -g
./node_modules/.bin/is-ci

Refer to ci-info docs for all supported CIs
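Conceptually, is-ci just exposes ci-info's detection, which boils down to checking vendor environment variables. A sketch with a small, hypothetical subset of the variables ci-info actually knows:

```javascript
// Sketch of the idea behind is-ci -- the real module delegates to
// ci-info, which recognizes many more vendors than this short list.
function isCi(env) {
  return Boolean(
    env.CI ||                     // Travis, CircleCI, GitLab CI, GitHub Actions, ...
    env.CONTINUOUS_INTEGRATION || // Travis
    env.BUILD_NUMBER              // Jenkins, TeamCity
  );
}

console.log(isCi(process.env));
```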
Merges the properties from a source object into a destination object.
Default Node-style module resolution plugin for eslint-plugin-import.
Published separately to allow pegging to a specific version in case of breaking changes.
Config is passed directly through to resolve as options:
settings:
import/resolver:
node:
extensions:
# if unset, default is just '.js', but it must be re-added explicitly if set
- .js
- .jsx
- .es6
- .coffee
paths:
# an array of absolute paths which will also be searched
# think NODE_PATH
- /usr/local/share/global_modules
# this is technically for identifying `node_modules` alternate names
moduleDirectory:
- node_modules # defaults to 'node_modules', but...
- bower_components
- project/src # can add a path segment here that will act like
# a source root, for in-project aliasing (i.e.
# `import MyStore from 'stores/my-store'`)

or to use the default options:
Is this specifier a node.js core module? Optionally provide a node version to check; defaults to the current node version.
var isCore = require('is-core-module');
var assert = require('assert');
assert(isCore('fs'));
assert(!isCore('butts'));

Clone the repo, npm install, and run npm test
When you want to fire an event no matter how a process exits:
process.exit(code) called.
process.kill(pid, sig) called.

Use signal-exit.
var onExit = require('signal-exit')
onExit(function (code, signal) {
console.log('process exited!')
})

var remove = onExit(function (code, signal) {}, options)
The return value of the function is a function that will remove the handler.
Note that the function only fires for signals if the signal would cause the process to exit. That is, there are no other listeners, and it is a fatal signal.
alwaysLast: Run this handler after any other signal or exit handlers. This causes process.emit to be monkeypatched.

Like the unix which utility.
Finds the first instance of a specified executable in the PATH environment variable. Does not cache the results, so hash -r is not needed when the PATH changes.
var which = require('which')
// async usage
which('node', function (er, resolvedPath) {
// er is returned if no "node" is found on the PATH
// if it is found, then the absolute path to the exec is returned
})
// or promise
which('node').then(resolvedPath => { ... }).catch(er => { ... not found ... })
// sync usage
// throws if not found
var resolved = which.sync('node')
// if nothrow option is used, returns null if not found
resolved = which.sync('node', {nothrow: true})
// Pass options to override the PATH and PATHEXT environment vars.
which('node', { path: someOtherPath }, function (er, resolved) {
if (er)
throw er
console.log('found at %j', resolved)
})

Same as the BSD which(1) binary.
usage: which [-as] program ...
You may pass an options object as the second argument.
path: Use instead of the PATH environment variable.
pathExt: Use instead of the PATHEXT environment variable.
all: Return all matches, instead of just the first one. Note that this means the function returns an array of strings instead of a single string.

Stripped-down version of s[n]printf(3c). We make a best effort to throw an exception when given a format string we don't understand, rather than ignoring it, so that we won't break existing programs if/when we implement the rest of this.
This implementation currently supports specifying
Everything else is currently unsupported, most notably: precision, unsigned numbers, non-decimal numbers, and characters.
Besides the usual POSIX conversions, this implementation supports:
%j: pretty-print a JSON object (using node's "inspect")
%r: pretty-print an Error object

First, install it:
# npm install extsprintf
Now, use it:
var mod_extsprintf = require('extsprintf');
console.log(mod_extsprintf.sprintf('hello %25s', 'world'));
outputs:
hello                     world
printf: same args as sprintf, but prints the result to stdout
fprintf: same args as sprintf, preceded by a Node stream. Prints the result to the given stream.
require('process'); just like any other module.
Works in node.js and browsers via the browser.js shim provided with the module.
The goal of this module is not to be a full-fledged alternative to the builtin process module. This module mostly exists to provide the nextTick functionality and little more. We keep this module lean because it will often be included by default by tools like browserify when it detects a module has used the process global.
It also exposes a "browser" member (i.e. process.browser) which is true in this implementation but undefined in node. This can be used in isomorphic code that adjusts its behavior depending on which environment it's running in.
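A typical isomorphic check based on this member might look like:

```javascript
// Branching on process.browser in isomorphic code. Under a bundler the
// shim sets it to true; in Node it is undefined.
var isBrowser = typeof process !== 'undefined' && Boolean(process.browser);

if (isBrowser) {
  // e.g. use window.fetch and other DOM APIs here
} else {
  // e.g. use Node builtins here
}
```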
If you are looking to provide other process methods, I suggest you monkey patch them onto the process global in your app. A list of user created patches is below.
If you are writing a bundler to package modules for client side use, make sure you use the browser field hint in package.json.
See https://gist.github.com/4339901 for details.
The browserify module will properly handle this field when bundling your files.
Computes the longest prefix string that is common to each path, excluding the base component. Tested with Node.js 8 and above.
npm install common-path-prefix
The module has one default export, the commonPathPrefix function:
Call commonPathPrefix() with an array of paths (strings) and an optional separator character:
const paths = ['templates/main.handlebars', 'templates/_partial.handlebars']
commonPathPrefix(paths, '/') // returns 'templates/'

If the separator is not provided, the first / or \ found in any of the paths is used; otherwise the platform-default value is used:
commonPathPrefix(['templates/main.handlebars', 'templates/_partial.handlebars']) // returns 'templates/'
commonPathPrefix(['templates\\main.handlebars', 'templates\\_partial.handlebars']) // returns 'templates\\'

You can provide any separator, for example:
An empty string is returned if no common prefix exists:
Note that the following does have a common prefix:
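The behavior described above can be sketched as follows (an illustration, not the package's implementation). Note in particular the last case: paths that share only a leading separator still have the common prefix '/':

```javascript
// Sketch of the described behavior -- not the package implementation.
function commonPathPrefix(paths, sep) {
  // Default separator: first / or \ found in any path, else '/'.
  sep = sep || (paths.join('').match(/[\/\\]/) || ['/'])[0];
  var split = paths.map(function (p) {
    return p.split(sep).slice(0, -1); // exclude the base component
  });
  var prefix = split[0];
  for (var i = 1; i < split.length; i++) {
    var j = 0;
    while (j < prefix.length && prefix[j] === split[i][j]) j++;
    prefix = prefix.slice(0, j);
  }
  return prefix.length ? prefix.join(sep) + sep : '';
}

console.log(commonPathPrefix(['templates/main.handlebars', 'templates/_partial.handlebars']));
// 'templates/'
console.log(commonPathPrefix(['foo/bar', 'baz/qux'])); // '' (no common prefix)
console.log(commonPathPrefix(['/foo/bar', '/baz/qux'])); // '/'
```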
Minimal module to check if a file is executable, and a normal file.
Uses fs.stat and tests against the PATHEXT environment variable on Windows.
var isexe = require('isexe')
isexe('some-file-name', function (err, isExe) {
if (err) {
console.error('probably file does not exist or something', err)
} else if (isExe) {
console.error('this thing can be run')
} else {
console.error('cannot be run')
}
})
// same thing but synchronous, throws errors
var isExe = isexe.sync('some-file-name')
// treat errors as just "not executable"
isexe('maybe-missing-file', { ignoreErrors: true }, callback)
var isExe = isexe.sync('maybe-missing-file', { ignoreErrors: true })

isexe(path, [options], [callback])

Check if the path is executable. If no callback is provided and a global Promise object is available, a Promise will be returned.
Will raise whatever errors may be raised by fs.stat, unless options.ignoreErrors is set to true.
isexe.sync(path, [options])

Same as isexe but returns the value and throws any errors raised.
ignoreErrors: Treat all errors as "no, this is not executable", but don't raise them.
uid: Number to use as the user id
gid: Number to use as the group id
pathExt: List of path extensions to use instead of the PATHEXT environment variable on Windows.

Is this an ES6 Symbol value?
var isSymbol = require('is-symbol');
assert(!isSymbol(function () {}));
assert(!isSymbol(null));
assert(!isSymbol(function* () { yield 42; return Infinity; }));
assert(isSymbol(Symbol.iterator));
assert(isSymbol(Symbol('foo')));
assert(isSymbol(Symbol.for('foo')));
assert(isSymbol(Object(Symbol('foo'))));

Simply clone the repo, npm install, and run npm test
Get CI environment variables for parallelizing builds
yarn add ci-parallel-vars
const ciParallelVars = require('ci-parallel-vars');
console.log(ciParallelVars); // { index: 3, total: 10 } || null

If you want to add support for another pair, please open a pull request and add them to index.js and to this list.

CI_NODE_INDEX / CI_NODE_TOTAL
CIRCLE_NODE_INDEX / CIRCLE_NODE_TOTAL
BITBUCKET_PARALLEL_STEP / BITBUCKET_PARALLEL_STEP_COUNT
BUILDKITE_PARALLEL_JOB / BUILDKITE_PARALLEL_JOB_COUNT
SEMAPHORE_CURRENT_JOB / SEMAPHORE_JOB_COUNT

One of these pairs must both be defined as numbers or ci-parallel-vars will be null.
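Derivation of the export from these pairs can be sketched like this (the shipped index.js may differ in details):

```javascript
// Sketch of how the export could be derived from the pairs listed
// above -- an illustration, not the module's source.
var pairs = [
  ['CI_NODE_INDEX', 'CI_NODE_TOTAL'],
  ['CIRCLE_NODE_INDEX', 'CIRCLE_NODE_TOTAL'],
  ['BITBUCKET_PARALLEL_STEP', 'BITBUCKET_PARALLEL_STEP_COUNT'],
  ['BUILDKITE_PARALLEL_JOB', 'BUILDKITE_PARALLEL_JOB_COUNT'],
  ['SEMAPHORE_CURRENT_JOB', 'SEMAPHORE_JOB_COUNT']
];

function parallelVars(env) {
  for (var i = 0; i < pairs.length; i++) {
    var index = Number(env[pairs[i][0]]);
    var total = Number(env[pairs[i][1]]);
    // Both halves of a pair must parse as numbers.
    if (!isNaN(index) && !isNaN(total)) return { index: index, total: total };
  }
  return null;
}

console.log(parallelVars({ CIRCLE_NODE_INDEX: '3', CIRCLE_NODE_TOTAL: '10' }));
// { index: 3, total: 10 }
```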
Implementation of function.prototype.bind
I mainly do this for unit tests I run on phantomjs. PhantomJS does not have Function.prototype.bind :(
npm install function-bind
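The package implements the same semantics as native Function.prototype.bind, so its behavior can be illustrated with the native method:

```javascript
// bind fixes `this` and optionally pre-fills leading arguments.
function greet(greeting, name) {
  return greeting + ', ' + name + '! I am ' + this.who;
}

var bound = greet.bind({ who: 'phantom' }, 'Hello');
console.log(bound('world')); // 'Hello, world! I am phantom'
```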
Some special networking features can optionally use a Flash component. Building the output SWF file requires the Flex SDK. A pre-built component is included: swf/SocketPool.swf.
Building the output SWF requires the mxmlc tool from the Flex SDK. If that tool is already installed, look in the package.json file for the commands to rebuild it. If you need the SDK installed, there is an npm module that installs it:
npm install
To build a regular component:
npm run build
Additional debug support can be built in with the following:
npm run build-debug
Flash support requires the use of a Policy Server.
mod_fsp provides an Apache module that can serve up a Flash Socket Policy. See mod_fsp/README for more details. This module makes it easy to modify an Apache server to allow cross domain requests to be made to it.
policyserver.py provides a very simple test policy server.
policyserver.js provides a very simple test policy server. If a server is needed for production environments, please use another option such as perhaps nodejs_socket_policy_server.
ESLint Scope is the ECMAScript scope analyzer used in ESLint. It is a fork of escope.
Install:
npm i eslint-scope --save
Example:
var eslintScope = require('eslint-scope');
var espree = require('espree');
var estraverse = require('estraverse');
var ast = espree.parse(code);
var scopeManager = eslintScope.analyze(ast);
var currentScope = scopeManager.acquire(ast); // global scope
estraverse.traverse(ast, {
enter: function(node, parent) {
// do stuff
if (/Function/.test(node.type)) {
currentScope = scopeManager.acquire(node); // get current function scope
}
},
leave: function(node, parent) {
if (/Function/.test(node.type)) {
currentScope = currentScope.upper; // set to parent scope
}
// do stuff
}
});

Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you're not sure where to dig in, check out the issues.
npm test - run all linting and tests
npm run lint - run all linting

Just a bike-shed.
This package provides utility functions and classes for making ESLint custom rules.
For examples:
See documentation.
See releases.
Welcome contributing!
Please use GitHub’s Issues/PRs.
npm test runs tests and measures coverage.
npm run clean removes the coverage result of the npm test command.
npm run coverage shows the coverage result of the last npm test command.
npm run lint runs ESLint.
npm run watch runs tests on each file change.

node-http-signature is a node.js library that has client and server components for Joyent's HTTP Signature Scheme.
Note the example below signs a request with the same key/cert used to start an HTTP server. This is almost certainly not what you actually want, but is just used to illustrate the API calls; you will need to provide your own key management in addition to this library.
var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');
var key = fs.readFileSync('./key.pem', 'ascii');
var options = {
host: 'localhost',
port: 8443,
path: '/',
method: 'GET',
headers: {}
};
// Adds a 'Date' header in, signs it, and adds the
// 'Authorization' header in.
var req = https.request(options, function(res) {
console.log(res.statusCode);
});
httpSignature.sign(req, {
key: key,
keyId: './cert.pem'
});
req.end();

var fs = require('fs');
var https = require('https');
var httpSignature = require('http-signature');
var options = {
key: fs.readFileSync('./key.pem'),
cert: fs.readFileSync('./cert.pem')
};
https.createServer(options, function (req, res) {
var rc = 200;
var parsed = httpSignature.parseRequest(req);
var pub = fs.readFileSync(parsed.keyId, 'ascii');
if (!httpSignature.verifySignature(parsed, pub))
rc = 401;
res.writeHead(rc);
res.end();
}).listen(8443);

npm install http-signature
See https://github.com/joyent/node-http-signature/issues.
This library provides the functionality of PBKDF2 with the ability to use any supported hashing algorithm returned from crypto.getHashes()
var pbkdf2 = require('pbkdf2')
var derivedKey = pbkdf2.pbkdf2Sync('password', 'salt', 1, 32, 'sha512')
...

For more information on the API, please see the relevant Node documentation.
For high performance, use the async variant (pbkdf2.pbkdf2), not pbkdf2.pbkdf2Sync; the async variant has the opportunity to use window.crypto.subtle when browserified.
This module is a derivative of cryptocoinjs/pbkdf2-sha256, so thanks to JP Richardson for laying the ground work.
Thank you to FangDun Cai for donating the package name on npm, if you’re looking for his previous module it is located at fundon/pbkdf2.
I felt compelled to put this on github and publish to npm. I haven’t tested every other big integer library out there, but the few that I have tested in comparison to this one have not even come close in performance. I am aware of the bi module on npm, however it has been modified and I wanted to publish the original without modifications. This is jsbn and jsbn2 from Tom Wu’s original website above, with the modular pattern applied to prevent global leaks and to allow for use with node.js on the server side.
var BigInteger = require('jsbn');
var a = new BigInteger('91823918239182398123');
alert(a.bitLength()); // 67
returns the base-10 number as a string
returns a new BigInteger equal to the negation of bi
returns new BI of absolute value
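For comparison, the operations above can be expressed with native BigInt (which jsbn predates, and which jsbn still covers on older engines):

```javascript
// Native-BigInt equivalents of the jsbn calls above -- a comparison,
// not jsbn's API.
var a = BigInt('91823918239182398123');

var bitLength = a.toString(2).length;             // 67, matches a.bitLength()
var asDecimal = a.toString();                     // base-10 number as a string
var negated = -a;                                 // negation
var absolute = negated < 0n ? -negated : negated; // absolute value
```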
Is this value a JS regex? This module works cross-realm/iframe, and despite ES6 @@toStringTag.
var isRegex = require('is-regex');
var assert = require('assert');
assert.notOk(isRegex(undefined));
assert.notOk(isRegex(null));
assert.notOk(isRegex(false));
assert.notOk(isRegex(true));
assert.notOk(isRegex(42));
assert.notOk(isRegex('foo'));
assert.notOk(isRegex(function () {}));
assert.notOk(isRegex([]));
assert.notOk(isRegex({}));
assert.ok(isRegex(/a/g));
assert.ok(isRegex(new RegExp('a', 'g')));

Simply clone the repo, npm install, and run npm test
Browser-friendly inheritance fully compatible with standard node.js inherits.
This package exports the standard inherits from the node.js util module in a node environment, but also provides an alternative browser-friendly implementation through the browser field. The alternative implementation is a literal copy of the standard one, located in a standalone module to avoid requiring util. It also has a shim for old browsers with no Object.create support.
While ensuring you use the standard inherits implementation in a node.js environment, it allows bundlers such as browserify to not include the full util package in your client code if all you need is the inherits function. This is worthwhile, because the browser shim for the util package is large and inherits is often the only function you need from it.
It's recommended to use this package instead of require('util').inherits for any code that may be used not only in node.js but in the browser too.
Version ~1.0 had a completely different motivation and is compatible with neither 2.0 nor the standard node.js inherits.
If you are using version ~1.0 and planning to switch to ~2.0, be careful:
super_ instead of super for referencing the superclass
recursively find the closest package.json
Say you want to check if the directory name of a project matches its module name in package.json:
const path = require('path')
const findRoot = require('find-root')
// from a starting directory, recursively search for the nearest
// directory containing package.json
const root = findRoot('/Users/jsdnxx/Code/find-root/tests')
// => '/Users/jsdnxx/Code/find-root'
const dirname = path.basename(root)
console.log('is it the same?')
console.log(dirname === require(path.join(root, 'package.json')).name)
You can also pass in a custom check function (by default, it checks for the existence of package.json in a directory). In this example, we traverse up to find the root of a git repo:
const fs = require('fs')
const gitRoot = findRoot('/Users/jsdnxx/Code/find-root/tests', function (dir) {
return fs.existsSync(path.resolve(dir, '.git'))
})
findRoot: (startingPath : string, check?: (dir: string) => boolean) => string
Returns the path for the nearest directory to startingPath containing a package.json file, eg /foo/module.
If check is provided, returns the path for the closest parent directory where check returns true.
Throws an error if no package.json is found at any level in the startingPath.
From package root:
Parse HTTP X-Forwarded-For header
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Parse the X-Forwarded-For header from the request. Returns an array of the addresses, including the socket address for the req, in reverse order (i.e. index 0 is the socket address and the last index is the furthest address, typically the end-user).
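The ordering described above can be illustrated with a small hypothetical helper (the real module reads the socket address and header off the req object itself; the function name here is an assumption for illustration):

```javascript
// Hypothetical helper: given a socket address and an X-Forwarded-For
// header value, return addresses in the order the module describes.
function forwardedAddresses (socketAddr, xffHeader) {
  const proxied = (xffHeader || '')
    .split(',')
    .map(function (s) { return s.trim() })
    .filter(Boolean)
  // index 0 is the socket address; the last entry is the furthest
  // (typically end-user) address
  return [socketAddr].concat(proxied.reverse())
}
```

The header itself lists the original client first, so reversing it puts the closest hop first and the end-user last.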
A port of node’s crypto module to the browser.
The goal of this module is to reimplement node’s crypto module, in pure javascript so that it can run in the browser.
Here is the subset that is currently implemented:
These features from node’s crypto are still unimplemented.
If you are interested in writing a feature, please implement as a new module, which will be incorporated into crypto-browserify as a dependency.
All deps must be compatible with node’s crypto (generate example inputs and outputs with node, and save base64 strings inside JSON, so that tests can run in the browser; see sha.js).
Crypto is extra serious so please do not hesitate to review the code, and post comments if you do.
Extremely fast HTTP Archive (HAR) validator using JSON Schema.
Please refer to har-cli for more info.
Note: as of v2.0.0 this module defaults to Promise based API. For backward compatibility with v1.x an async/callback API is also provided
npm install ieee754
var ieee754 = require('ieee754')
The ieee754 object has the following functions:
ieee754.read = function (buffer, offset, isLE, mLen, nBytes)
ieee754.write = function (buffer, value, offset, isLE, mLen, nBytes)
The arguments mean the following:
buffer, the buffer to read from or write to
offset, the byte offset at which to start
isLE, whether the value is stored little-endian
mLen, the mantissa length in bits
nBytes, the number of bytes occupied by the value
value, the number to encode (only for write)
The IEEE Standard for Floating-Point Arithmetic (IEEE 754) is a technical standard for floating-point computation. Read more.
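For the standard widths, Node's own Buffer exposes equivalent IEEE 754 read/write operations; ieee754 generalizes them to arbitrary mantissa lengths (mLen) and byte counts (nBytes). A quick reference point, using only built-in Buffer methods:

```javascript
// A float64 in ieee754 terms: nBytes = 8, mLen = 52, and the LE
// variants correspond to isLE = true.
const buf = Buffer.alloc(8)
buf.writeDoubleLE(0.5, 0)
const value = buf.readDoubleLE(0) // round-trips exactly: 0.5
```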
Bytes go in, but they don’t come out (when muted).
This is a basic pass-through stream, but when muted, the bytes are silently dropped, rather than being passed through.
var MuteStream = require('mute-stream')
var ms = new MuteStream(options)
ms.pipe(process.stdout)
ms.write('foo') // writes 'foo' to stdout
ms.mute()
ms.write('bar') // does not write 'bar'
ms.unmute()
ms.write('baz') // writes 'baz' to stdout
// can also be used to mute incoming data
var ms = new MuteStream
input.pipe(ms)
ms.on('data', function (c) {
console.log('data: ' + c)
})
input.emit('data', 'foo') // logs 'foo'
ms.mute()
input.emit('data', 'bar') // does not log 'bar'
ms.unmute()
input.emit('data', 'baz') // logs 'baz'
All options are optional.
replace Set to a string to replace each character with the specified string when muted. (So you can show **** instead of the password, for example.)
prompt If you are using a replacement char, and also using a prompt with a readline stream (as for a Password: ***** input), then specify what the prompt is so that backspace will work properly. Otherwise, pressing backspace will overwrite the prompt with the replacement character, which is weird.
Set muted to true. Turns .write() into a no-op.
Set muted to false
True if the pipe destination is a TTY, or if the incoming pipe source is a TTY.
The other standard readable and writable stream methods are all available. The MuteStream object acts as a facade to its pipe source and destination.
util.deprecate() function with browser support
In Node.js, this module simply re-exports the util.deprecate() function.
In the web browser (i.e. via browserify), a browser-specific implementation of the util.deprecate() function is used.
A deprecate() function is the only thing exposed by this module.
// setup:
exports.foo = deprecate(foo, 'foo() is deprecated, use bar() instead');
// users see:
foo();
// foo() is deprecated, use bar() instead
foo();
foo();
Node’s constants module for the browser.
To use with browserify cli:
To use with browserify api:
browserify()
.require('constants-browserify', { expose: 'constants' })
.add(__dirname + '/script.js')
.bundle()
// ...
With npm do
Port of the OpenBSD bcrypt_pbkdf function to pure Javascript. npm-ified version of Devi Mandiri’s port, with some minor performance improvements. The code is copied verbatim (and un-styled) from Devi’s work.
bcrypt_pbkdf.pbkdf(pass, passlen, salt, saltlen, key, keylen, rounds)
Derive a cryptographic key of arbitrary length from a given password and salt, using the OpenBSD bcrypt_pbkdf function. This is a combination of Blowfish and SHA-512.
See this article for further information.
Parameters:
pass, a Uint8Array of length passlen
passlen, an integer Number
salt, a Uint8Array of length saltlen
saltlen, an integer Number
key, a Uint8Array of length keylen, will be filled with output
keylen, an integer Number
rounds, an integer Number, number of rounds of the PBKDF to run
bcrypt_pbkdf.hash(sha2pass, sha2salt, out)
Calculate a Blowfish hash, given SHA2-512 output of a password and salt. Used as part of the inner round function in the PBKDF.
Parameters:
sha2pass, a Uint8Array of length 64
sha2salt, a Uint8Array of length 64
out, a Uint8Array of length 32, will be filled with output
HTTP verbs that Node.js core’s HTTP parser supports.
This module provides an export that is just like http.METHODS from Node.js core, with the following differences:
Works on Node.js versions that do not have the http.METHODS export (0.10 and lower).
Works in browserify without pulling in the http shim module.
This is an array of lower-cased method names that Node.js supports. If Node.js provides the http.METHODS export, then this is the same array lower-cased, otherwise it is a snapshot of the verbs from Node.js 0.10.
A node module that calls a callback when a readable/writable/duplex stream has completed or failed.
npm install end-of-stream
Simply pass a stream and a callback to eos. Legacy streams, streams2, and streams3 are all supported.
var eos = require('end-of-stream');
eos(readableStream, function(err) {
// this will be set to the stream instance
if (err) return console.log('stream had an error or closed early');
console.log('stream has ended', this === readableStream);
});
eos(writableStream, function(err) {
if (err) return console.log('stream had an error or closed early');
console.log('stream has finished', this === writableStream);
});
eos(duplexStream, function(err) {
if (err) return console.log('stream had an error or closed early');
console.log('stream has ended and finished', this === duplexStream);
});
eos(duplexStream, {readable:false}, function(err) {
if (err) return console.log('stream had an error or closed early');
console.log('stream has finished but might still be readable');
});
eos(duplexStream, {writable:false}, function(err) {
if (err) return console.log('stream had an error or closed early');
console.log('stream has ended but might still be writable');
});
eos(readableStream, {error:false}, function(err) {
// do not treat emit('error', err) as a end-of-stream
});
end-of-stream is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.
Abstract base class to inherit from if you want to create streams implementing the same API as node crypto Hash (for Cipher / Decipher check crypto-browserify/cipher-base).
const HashBase = require('hash-base')
const inherits = require('inherits')
// our hash function is XOR sum of all bytes
function MyHash () {
HashBase.call(this, 1) // in bytes
this._sum = 0x00
}
inherits(MyHash, HashBase)
MyHash.prototype._update = function () {
for (let i = 0; i < this._block.length; ++i) this._sum ^= this._block[i]
}
MyHash.prototype._digest = function () {
return this._sum
}
const data = Buffer.from([ 0x00, 0x42, 0x01 ])
const hash = new MyHash().update(data).digest()
console.log(hash) // => 67
You can also check the source code or crypto-browserify/md5.js
Convert a string of words to a JavaScript identifier
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
var toIdentifier = require('toidentifier')
console.log(toIdentifier('Bad Request'))
// => "BadRequest"This CommonJS module exports a single default function: toIdentifier.
Given a string as the argument, it will be transformed according to the following rules and the new string will be returned:
The string is split into words on space characters (0x20).
The first character of each word is upper-cased and the words are joined with no separator.
All non-word ([0-9a-z_]) characters are removed.
pump is a small node module that pipes streams together and destroys all of them if one of them closes.
npm install pump
When using standard source.pipe(dest), source will not be destroyed if dest emits close or an error. You are also not able to provide a callback to tell when the pipe has finished.
pump does these two things for you
Simply pass the streams you want to pipe together to pump and add an optional callback
var pump = require('pump')
var fs = require('fs')
var source = fs.createReadStream('/dev/random')
var dest = fs.createWriteStream('/dev/null')
pump(source, dest, function(err) {
console.log('pipe finished', err)
})
setTimeout(function() {
dest.destroy() // when dest is closed pump will destroy source
}, 1000)
You can use pump to pipe more than two streams together as well
var transform = someTransformStream()
pump(source, transform, anotherTransform, dest, function(err) {
console.log('pipe finished', err)
})
If source, transform, anotherTransform or dest closes, all of them will be destroyed.
Similarly to stream.pipe(), pump() returns the last stream passed in, so you can do:
return pump(s1, s2) // returns s2
If you want to return a stream that combines both s1 and s2 into a single stream, use pumpify instead.
pump is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.
JS-only implementation of HMAC DRBG.
const DRBG = require('hmac-drbg');
const hash = require('hash.js');
const d = new DRBG({
hash: hash.sha256,
entropy: '0123456789abcdef',
nonce: '0123456789abcdef',
pers: '0123456789abcdef' /* or `null` */
});
d.generate(32, 'hex');
Determine if an object is a Buffer (including the browserify Buffer)
Why not use Buffer.isBuffer?
This module lets you check if an object is a Buffer without using Buffer.isBuffer (which includes the whole buffer module in browserify).
It’s future-proof and works in node too!
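The check can be sketched roughly as follows (an assumption about the shape of the implementation for illustration, not a verbatim copy of the module source): asking the value's own constructor means a browserify Buffer is recognized without this code ever importing the buffer module.

```javascript
// Delegate to the value's own constructor, so any Buffer implementation
// (node core or browserify) can vouch for its own instances.
function isBuffer (obj) {
  return obj != null && obj.constructor != null &&
    typeof obj.constructor.isBuffer === 'function' &&
    obj.constructor.isBuffer(obj)
}
```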
var isBuffer = require('is-buffer')
isBuffer(new Buffer(4)) // true
isBuffer(undefined) // false
isBuffer(null) // false
isBuffer('') // false
isBuffer(true) // false
isBuffer(false) // false
isBuffer(0) // false
isBuffer(1) // false
isBuffer(1.0) // false
isBuffer('string') // false
isBuffer({}) // false
isBuffer(function foo () {}) // false
Is this value a JS Date object? This module works cross-realm/iframe, and despite ES6 @@toStringTag.
var isDate = require('is-date-object');
var assert = require('assert');
assert.notOk(isDate(undefined));
assert.notOk(isDate(null));
assert.notOk(isDate(false));
assert.notOk(isDate(true));
assert.notOk(isDate(42));
assert.notOk(isDate('foo'));
assert.notOk(isDate(function () {}));
assert.notOk(isDate([]));
assert.notOk(isDate({}));
assert.notOk(isDate(/a/g));
assert.notOk(isDate(new RegExp('a', 'g')));
assert.ok(isDate(new Date()));
Simply clone the repo, npm install, and run npm test
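The cross-realm behavior can be sketched with a brand check (an assumption about the technique, not the module source): Date.prototype methods throw when applied to a non-Date, no matter what @@toStringTag claims.

```javascript
// Borrow a Date.prototype method and see whether the value accepts it.
const getDay = Date.prototype.getDay
function isDate (value) {
  if (typeof value !== 'object' || value === null) return false
  try {
    getDay.call(value)
    return true
  } catch (e) {
    return false
  }
}
```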
Is this value negative zero? === will lie to you.
var isNegativeZero = require('is-negative-zero');
var assert = require('assert');
assert.notOk(isNegativeZero(undefined));
assert.notOk(isNegativeZero(null));
assert.notOk(isNegativeZero(false));
assert.notOk(isNegativeZero(true));
assert.notOk(isNegativeZero(0));
assert.notOk(isNegativeZero(42));
assert.notOk(isNegativeZero(Infinity));
assert.notOk(isNegativeZero(-Infinity));
assert.notOk(isNegativeZero(NaN));
assert.notOk(isNegativeZero('foo'));
assert.notOk(isNegativeZero(function () {}));
assert.notOk(isNegativeZero([]));
assert.notOk(isNegativeZero({}));
assert.ok(isNegativeZero(-0));
Simply clone the repo, npm install, and run npm test
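The trick behind the tagline fits in one line: -0 === 0 is true, but dividing by -0 yields -Infinity while dividing by +0 yields Infinity. A sketch of such a check:

```javascript
// Distinguish -0 from +0 via the sign of the reciprocal.
function isNegativeZero (value) {
  return value === 0 && 1 / value === -Infinity
}
```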
The insecure key derivation algorithm from OpenSSL.
WARNING: DO NOT USE, except for compatibility reasons.
MD5 is insecure.
Use at least scrypt or pbkdf2-hmac-sha256 instead.
EVP_BytesToKey(password, salt, keyLen, ivLen)
password - Buffer, password used to derive the key data.
salt - 8 byte Buffer or null, used as a salt in the derivation.
keyLen - number, key length in bytes.
ivLen - number, iv length in bytes.
Returns: { key: Buffer, iv: Buffer }
MD5 with aes-256-cbc:
const crypto = require('crypto')
const EVP_BytesToKey = require('evp_bytestokey')
const result = EVP_BytesToKey(
'my-secret-password',
null,
32,
16
)
// =>
// { key: <Buffer e3 4f 96 f3 86 24 82 7c c2 5d ff 23 18 6f 77 72 54 45 7f 49 d4 be 4b dd 4f 6e 1b cc 92 a4 27 33>,
// iv: <Buffer 85 71 9a bf ae f4 1e 74 dd 46 b6 13 79 56 f5 5b> }
const cipher = crypto.createCipheriv('aes-256-cbc', result.key, result.iv)
Only call a function once.
var once = require('once')
function load (file, cb) {
cb = once(cb)
loader.load('file')
loader.once('load', cb)
loader.once('error', cb)
}
Or add to the Function.prototype in a responsible way:
// only has to be done once
require('once').proto()
function load (file, cb) {
cb = cb.once()
loader.load('file')
loader.once('load', cb)
loader.once('error', cb)
}Ironically, the prototype feature makes this module twice as complicated as necessary.
To check whether your function has been called, use fn.called. Once the function is called for the first time, the return value of the original function is saved in fn.value and subsequent calls will continue to return this value.
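The wrapper behavior just described can be sketched in a few lines (a minimal illustration; the real module also provides once.strict and the opt-in prototype helper):

```javascript
// Remember the first call's return value; replay it on later calls.
function once (fn) {
  function wrapper () {
    if (wrapper.called) return wrapper.value
    wrapper.called = true
    wrapper.value = fn.apply(this, arguments)
    return wrapper.value
  }
  wrapper.called = false
  return wrapper
}
```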
var once = require('once')
function load (cb) {
cb = once(cb)
var stream = createStream()
stream.once('data', cb)
stream.once('end', function () {
if (!cb.called) cb(new Error('not found'))
})
}
once.strict(func)
Throw an error if the function is called twice.
Some functions are expected to be called only once. Using once for them would potentially hide logical errors.
In the example below, the greet function has to call the callback only once:
function greet (name, cb) {
// return is missing from the if statement
// when no name is passed, the callback is called twice
if (!name) cb('Hello anonymous')
cb('Hello ' + name)
}
function log (msg) {
console.log(msg)
}
// this will print 'Hello anonymous' but the logical error will be missed
greet(null, once(msg))
// once.strict will print 'Hello anonymous' and throw an error when the callback is called the second time
greet(null, once.strict(msg))
Node style SHA on pure JavaScript.
var shajs = require('sha.js')
console.log(shajs('sha256').update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
console.log(new shajs.sha256().update('42').digest('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
var sha256stream = shajs('sha256')
sha256stream.end('42')
console.log(sha256stream.read().toString('hex'))
// => 73475cb40a568e8da8a045ced110137e159f890ac4da883b6b17dc651b3a8049
sha.js currently implements:
Note, this doesn’t actually implement a stream, but wrapping this in a stream is trivial. It does update incrementally, so you can hash things larger than RAM, as it uses a constant amount of memory (except when using base64 or utf8 encoding, see code comments).
This work is derived from Paul Johnston’s A JavaScript implementation of the Secure Hash Algorithm.
Compressible Content-Type / mime checking.
Checks if the given Content-Type is compressible. The type argument is expected to be a valid MIME type or Content-Type string, though no validation is performed.
The MIME is looked up in the mime-db and if there is compressible information in the database entry, that is returned. Otherwise, this module will fallback to true for the following types:
text/*
*/*+json
*/*+text
*/*+xml
If this module is not sure if a type is specifically compressible or specifically uncompressible, undefined is returned.
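The fallback patterns can be expressed as a single regular expression (a sketch of the fallback step only; by the description above, the real module consults mime-db before falling back):

```javascript
// text/* types, or any type whose suffix is +json, +text, or +xml.
const FALLBACK = /^text\/|\+(?:json|text|xml)$/i

function probablyCompressible (type) {
  return FALLBACK.test(type)
}
```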
Node-core v8.9.4 string_decoder for userland
Node-core string_decoder for userland
This package is a mirror of the string_decoder implementation in Node-core.
Full documentation may be found on the Node.js website.
As of version 1.0.0 string_decoder uses semantic versioning.
Previous version numbers match the versions found in Node core, e.g. 0.10.24 matches Node 0.10.24, likewise 0.11.10 matches Node 0.11.10.
The build/ directory contains a build script that will scrape the source from the nodejs/node repo given a specific Node version.
string_decoder is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:
Recommending versions of readable-stream to be included in Node.js.
See readable-stream for more details.
var hasSymbols = require('has-symbols');
hasSymbols() === true; // if the environment has native Symbol support. Not polyfillable, not forgeable.
var hasSymbolsKinda = require('has-symbols/shams');
hasSymbolsKinda() === true; // if the environment has a Symbol sham that mostly follows the spec.
Simply clone the repo, npm install, and run npm test
Package that contains a collection of Dojo themes.
Please Note: If you are looking for Dojo 1 themes, these have been relocated to @dojo/dijit-themes. The github url registered with bower has also been updated to point to the new repository, if you encounter any issues please run bower cache clean and try again.
Install @dojo/themes with npm i @dojo/themes.
In main.css: @import '~@dojo/themes/dojo/index.css'.
Or, to use the prebuilt assets, install @dojo/themes with npm i @dojo/themes, then:
In index.html: <link rel="stylesheet" href="node_modules/@dojo/themes/dojo/dojo-{version}.css">.
In index.html: <script src="node_modules/@dojo/themes/dojo/dojo-{version}.js"></script>.
To compose and extend the themes within a dojo project, run npm i @dojo/themes and use the css-module composes functionality. Variables can be used by using @import to import the variables.css file from a theme. This functionality is added by a post-css plugin within the dojo build command.
/* myButton.m.css */
@import '@dojo/themes/dojo/variables.css';
.root {
composes: root from '@dojo/themes/dojo/button.m.css';
background-color: var(--dojo-green);
}
The following npm scripts are available to facilitate development:
build:tcm: generate .m.css.d.ts files
watch: generate .m.css.d.ts files in watch mode
Additional ESLint rules for ESLint directive comments (e.g. //eslint-disable-line).
eslint-plugin-eslint-comments follows semantic versioning and ESLint’s Semantic Versioning Policy.
Welcome contributing!
Please use GitHub’s Issues/PRs.
npm test runs tests and measures coverage.
npm run build updates README.md, index.js, and the header of all rules’ documents.
npm run clean removes the coverage of the last npm test command.
npm run coverage shows the coverage of the last npm test command.
npm run lint runs ESLint for this codebase.
npm run watch runs tests and measures coverage when source code is changed.
Is this value a JS String object or primitive? This module works cross-realm/iframe, and despite ES6 @@toStringTag.
var isString = require('is-string');
var assert = require('assert');
assert.notOk(isString(undefined));
assert.notOk(isString(null));
assert.notOk(isString(false));
assert.notOk(isString(true));
assert.notOk(isString(function () {}));
assert.notOk(isString([]));
assert.notOk(isString({}));
assert.notOk(isString(/a/g));
assert.notOk(isString(new RegExp('a', 'g')));
assert.notOk(isString(new Date()));
assert.notOk(isString(42));
assert.notOk(isString(NaN));
assert.notOk(isString(Infinity));
assert.notOk(isString(new Number(42)));
assert.ok(isString('foo'));
assert.ok(isString(Object('foo')));
Simply clone the repo, npm install, and run npm test
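The cross-realm behavior can be sketched with a brand check (an assumption about the technique, not the module source): String.prototype.valueOf throws for anything that is not a String object, regardless of @@toStringTag.

```javascript
// Primitives short-circuit; objects are tested by borrowing a
// String.prototype method.
const strValueOf = String.prototype.valueOf
function isString (value) {
  if (typeof value === 'string') return true
  if (typeof value !== 'object' || value === null) return false
  try {
    strValueOf.call(value)
    return true
  } catch (e) {
    return false
  }
}
```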
Tiny package for detecting reserved words.
Reserved Word is either a Keyword, or a Future Reserved Word, or a Null Literal, or a Boolean Literal. See: ES5 #7.6.1 and ES6 #11.6.2.
npm install reserved-words
Returns true if the provided identifier string is a Reserved Word in some ECMAScript dialect (ECMA-262 edition).
If the strict flag is truthy, this function additionally checks whether word is a Keyword or Future Reserved Word under strict mode.
var reserved = require('reserved-words');
reserved.check('volatile', 'es3'); // true
reserved.check('volatile', 'es2015'); // false
reserved.check('yield', 3); // false
reserved.check('yield', 6); // true
Represents ECMA-262 3rd edition.
See section 7.5.1.
Represents ECMA-262 5th edition (ECMAScript 5.1).
Reserved Words are formally defined in ECMA262 sections 7.6.1.1 and 7.6.1.2.
Represents ECMA-262 6th edition.
Reserved Words are formally defined in sections 11.6.2.1 and 11.6.2.2.
Does a JS type have a getter/setter property
npm install --save is-get-set-prop
import isGetSetProp from 'is-get-set-prop';
isGetSetProp('array', 'length');
// => true
isGetSetProp('ARRAY', 'push');
// => false
// is-get-set-prop can only verify native JS types
isGetSetProp('gulp', 'task');
// => false;
var isGetSetProp = require('is-get-set-prop');
isGetSetProp('array', 'length');
// => true
isGetSetProp('ARRAY', 'push');
// => false
// is-get-set-prop can only verify native JS types
isGetSetProp('customObject', 'customGetterOrSetter');
// => false;
Type: string
A native JS type to examine. Note: is-get-set-prop can only verify native JS types.
Type: string
Property name to check as a getter/setter on the given type.
Translate between JOSE and ASN.1/DER encodings for ECDSA signatures
var format = require('ecdsa-sig-formatter');
var derSignature = '..'; // asn.1/DER encoded ecdsa signature
var joseSignature = format.derToJose(derSignature);
.derToJose(Buffer|String signature, String alg) -> String
Convert the ASN.1/DER encoded signature to a JOSE-style concatenated signature. Returns a base64 url encoded String.
If signature is a String, it should be base64 encoded.
.joseToDer(Buffer|String signature, String alg) -> Buffer
Convert the JOSE-style concatenated signature to an ASN.1/DER encoded signature. Returns a Buffer.
If signature is a String, it should be base64 url encoded.
Fork the repository. Committing directly against this repository is highly discouraged.
Make your modifications in a branch, updating and writing new unit tests as necessary in the spec directory.
Ensure that all tests pass with npm test
Rebase your changes against master. Do not merge.
Submit a pull request to this repository. Wait for tests to run and someone to chime in.
This repository is configured with EditorConfig and ESLint rules.
Very minimal utils required in order to write a reasonable JS-only crypto module.
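To illustrate one of these helpers, toHex turns a byte array into a hex string; a hypothetical re-implementation for illustration (not the module source):

```javascript
// Map each byte to two lowercase hex digits and concatenate.
function toHex (bytes) {
  return bytes.map(function (b) {
    return b.toString(16).padStart(2, '0')
  }).join('')
}
```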
const utils = require('minimalistic-crypto-utils');
utils.toArray('abcd', 'hex');
utils.encode([ 1, 2, 3, 4 ], 'hex');
utils.toHex([ 1, 2, 3, 4 ]);
Array#isArray for older browsers.
var isArray = require('isarray');
console.log(isArray([])); // => true
console.log(isArray({})); // => false
With npm do
Then bundle for the browser with browserify.
With component do
Combine an array of streams into a single duplex stream using pump and duplexify. If one of the streams closes/errors all streams in the pipeline will be destroyed.
npm install pumpify
Pass the streams you want to pipe together to pumpify: pipeline = pumpify(s1, s2, s3, ...). pipeline is a duplex stream that writes to the first stream and reads from the last one. Streams are piped together using pump so if one of them closes all streams will be destroyed.
var pumpify = require('pumpify')
var tar = require('tar-fs')
var zlib = require('zlib')
var fs = require('fs')
var untar = pumpify(zlib.createGunzip(), tar.extract('output-folder'))
// you can also pass an array instead
// var untar = pumpify([zlib.createGunzip(), tar.extract('output-folder')])
fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)
If you are pumping object streams together use pipeline = pumpify.obj(s1, s2, ...). Call pipeline.destroy() to destroy the pipeline (including the streams passed to pumpify).
setPipeline(s1, s2, ...)
Similar to duplexify, you can also define the pipeline asynchronously using setPipeline(s1, s2, ...)
var untar = pumpify()
setTimeout(function() {
// will start draining the input now
untar.setPipeline(zlib.createGunzip(), tar.extract('output-folder'))
}, 1000)
fs.createReadStream('some-gzipped-tarball.tgz').pipe(untar)
pumpify is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.

Emulate console for all the browsers
You usually do not have to install console-browserify yourself! If your code runs in Node.js, console is built in. If your code runs in the browser, bundlers like browserify or webpack also include the console-browserify module when you do require('console').
But if none of those apply, with npm do:
npm install console-browserify
var console = require("console")
// Or when manually using console-browserify directly:
// var console = require("console-browserify")
console.log("hello world!")See the Node.js Console docs. console-browserify does not support creating new Console instances and does not support the Inspector-only methods.
PRs are very welcome! The main way to contribute to console-browserify is by porting features, bugfixes and tests from Node.js. Ideally, code contributions to this module are copy-pasted from Node.js and transpiled to ES5, rather than reimplemented from scratch. Matching the Node.js code as closely as possible makes maintenance simpler when new changes land in Node.js. This module intends to provide exactly the same API as Node.js, so features that are not available in the core console module will not be accepted. Feature requests should instead be directed at nodejs/node and will be added to this module once they are implemented in Node.js.
If there is a difference in behaviour between Node.js’s console module and this module, please open an issue!
Convert a string to pascal-case.
Install with npm
var pascalcase = require('pascalcase');
pascalcase('a');
//=> 'A'
pascalcase('foo bar baz');
//=> 'FooBarBaz'
pascalcase('foo_bar-baz');
//=> 'FooBarBaz'
pascalcase('foo.bar.baz');
//=> 'FooBarBaz'
pascalcase('foo/bar/baz');
//=> 'FooBarBaz'
pascalcase('foo[bar)baz');
//=> 'FooBarBaz'
pascalcase('#foo+bar*baz');
//=> 'FooBarBaz'
pascalcase('$foo~bar`baz');
//=> 'FooBarBaz'
pascalcase('_foo_bar-baz-');
//=> 'FooBarBaz'
Install dev dependencies:
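A sketch that reproduces the examples above (the published module may handle additional edge cases; this only illustrates the transformation):

```javascript
// Split on runs of non-alphanumeric characters, capitalize each word,
// and join with no separator.
function pascalcase (str) {
  return String(str)
    .split(/[^a-zA-Z0-9]+/)
    .filter(Boolean)
    .map(function (w) { return w[0].toUpperCase() + w.slice(1) })
    .join('')
}
```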
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue
Jon Schlinkert
This file was generated by verb-cli on August 19, 2015.
# responselike
A response-like object for mocking a Node.js HTTP response stream
Returns a streamable response object similar to a Node.js HTTP response stream. Useful for formatting cached responses so they can be consumed by code expecting a real response.
npm install --save responselike
Or if you’re just using it for testing you’ll want:
npm install --save-dev responselike
const Response = require('responselike');
const response = new Response(200, { foo: 'bar' }, Buffer.from('Hi!'), 'https://example.com');
response.statusCode;
// 200
response.headers;
// { foo: 'bar' }
response.body;
// <Buffer 48 69 21>
response.url;
// 'https://example.com'
response.pipe(process.stdout);
// Hi!
Returns a streamable response object similar to a Node.js HTTP response stream.
Type: number
HTTP response status code.
Type: object
HTTP headers object. Keys will be automatically lowercased.
Type: buffer
A Buffer containing the response body. The Buffer contents will be streamable but are also exposed directly as response.body.
Type: string
Request URL string.
This is plugin for Acorn - a tiny, fast JavaScript parser, written completely in JavaScript.
It was created as an experimental, faster alternative to the React.js JSX parser. Later, it replaced the official parser and these days is used by many prominent development tools.
Please note that this tool only parses source code to JSX AST, which is useful for various language tools and services. If you want to transpile your code to regular ES5-compliant JavaScript with source map, check out Babel and Buble transpilers which use acorn-jsx under the hood.
Requiring this module provides you with an Acorn plugin that you can use like this:
var acorn = require("acorn");
var jsx = require("acorn-jsx");
acorn.Parser.extend(jsx()).parse("my(<jsx/>, 'code');");
Note that the official spec doesn’t support a mix of XML namespaces and object-style access in tag names (#27), as in <namespace:Object.Property />, so this was deprecated in acorn-jsx@3.0. If you still want to opt in to support for such constructions, you can pass the following option:
Also, since most apps use the plain React transformer, a new option was introduced that allows namespaces to be prohibited completely:
Note that by default allowNamespaces is enabled for spec compliance.
ESLint plugin for disallowing each ECMAScript syntactic feature individually.
Espree, the default parser of ESLint, supports the ecmaVersion option. However, it doesn’t support enabling each syntactic feature individually.
This plugin lets us disable each syntactic feature individually. So we can enable arbitrary syntactic features with the combination of ecmaVersion and this plugin.
See documentation
This plugin follows semantic versioning and ESLint’s semantic versioning policy.
See releases.
Welcome contributing!
Please use GitHub’s Issues/PRs.
npm test runs tests and measures coverage.
npm run clean removes the coverage result of the npm test command.
npm run coverage shows the coverage result of the last npm test command.
npm run docs:build builds documentation.
npm run docs:watch builds documentation on each file change.
npm run watch runs tests on each file change.
Extend an object with the properties of additional objects. node.js/javascript util.
Install with npm
Pass an empty object to shallow clone:
Object constructor.
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-cli on June 29, 2015.
# registry-auth-token
Get the auth token set for an npm registry from .npmrc. Also allows fetching the configured registry URL for a given npm scope.
npm install --save registry-auth-token
Returns an object containing token and type, or undefined if no token can be found. type can be either Bearer or Basic.
var getAuthToken = require('registry-auth-token')
var getRegistryUrl = require('registry-auth-token/registry-url')
// Get auth token and type for default `registry` set in `.npmrc`
console.log(getAuthToken()) // {token: 'someToken', type: 'Bearer'}
// Get auth token for a specific registry URL
console.log(getAuthToken('//registry.foo.bar'))
// Find the registry auth token for a given URL (with deep path):
// If registry is at `//some.host/registry`
// URL passed is `//some.host/registry/deep/path`
// Will find token the closest matching path; `//some.host/registry`
console.log(getAuthToken('//some.host/registry/deep/path', {recursive: true}))
// Find the configured registry url for scope `@foobar`.
// Falls back to the global registry if not defined.
console.log(getRegistryUrl('@foobar'))
// Use the npm config that is passed in
console.log(getRegistryUrl('http://registry.foobar.eu/', {
npmrc: {
'registry': 'http://registry.foobar.eu/',
'//registry.foobar.eu/:_authToken': 'qar'
}
}))// If auth info can be found:
{token: 'someToken', type: 'Bearer'}
// Or:
{token: 'someOtherToken', type: 'Basic'}
// Or, if nothing is found:
undefined
Please be careful when using this. Leaking your auth token is dangerous.
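The lookup the module performs can be sketched, in simplified form, over a plain npmrc-style object (the helper name and npmrc shape here are illustrative, not the library’s internals):

```javascript
// A simplified sketch of auth-token resolution from an npmrc-style object.
// The real module also handles _auth/_password (Basic auth), environment
// variable substitution, and reading the actual .npmrc files.
function findAuthToken(registryUrl, npmrc) {
  // strip the protocol, keeping the '//host/path' form used by npmrc keys
  let key = registryUrl.replace(/^https?:/, '');
  // walk up the path until a matching '<registry>:_authToken' entry is found
  while (key.length > 2) {
    const token = npmrc[key.replace(/\/?$/, '/') + ':_authToken'];
    if (token) return { token: token, type: 'Bearer' };
    key = key.slice(0, key.lastIndexOf('/'));
  }
  return undefined;
}

findAuthToken('//some.host/registry/deep/path', {
  '//some.host/registry/:_authToken': 'someToken'
});
// => { token: 'someToken', type: 'Bearer' }
```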
ECMAScript “ToPrimitive” algorithm. Provides ES5 and ES2015 versions. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. Alternative versions will also be available under an es5/es2015 exported property if you require a specific version.
var toPrimitive = require('es-to-primitive');
var assert = require('assert');
assert(toPrimitive(function () {}) === String(function () {}));
var date = new Date();
assert(toPrimitive(date) === String(date));
assert(toPrimitive({ valueOf: function () { return 3; } }) === 3);
assert(toPrimitive(['a', 'b', 3]) === String(['a', 'b', 3]));
var sym = Symbol();
assert(toPrimitive(Object(sym)) === sym);Simply clone the repo, npm install, and run npm test
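The ordering the assertions above rely on can be sketched roughly as follows (a sketch only; the real operation also handles Symbol.toPrimitive and the hint argument):

```javascript
// A rough sketch of OrdinaryToPrimitive with the "number" hint: try
// valueOf first, then toString, and return the first primitive result.
// (The real operation also supports Symbol.toPrimitive and the "string"
// hint, under which the method order is reversed; Date, for example,
// defaults to the "string" hint.)
function isPrimitive(v) {
  return v === null || (typeof v !== 'object' && typeof v !== 'function');
}

function ordinaryToPrimitive(input) {
  if (isPrimitive(input)) return input;
  const methods = ['valueOf', 'toString'];
  for (let i = 0; i < methods.length; i += 1) {
    const method = input[methods[i]];
    if (typeof method === 'function') {
      const result = method.call(input);
      if (isPrimitive(result)) return result;
    }
  }
  throw new TypeError('Cannot convert object to primitive value');
}

ordinaryToPrimitive({ valueOf: function () { return 3; } }); // => 3
ordinaryToPrimitive(['a', 'b', 3]); // => 'a,b,3' (via toString)
```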
Make a callback- or promise-based function support both promises and callbacks.
Uses the native promise implementation.
universalify.fromCallback(fn)Takes a callback-based function to universalify, and returns the universalified function.
Function must take a callback as the last parameter that will be called with the signature (error, result). universalify does not support calling the callback with three or more arguments, and does not ensure that the callback is only called once.
function callbackFn (n, cb) {
setTimeout(() => cb(null, n), 15)
}
const fn = universalify.fromCallback(callbackFn)
// Works with Promises:
fn('Hello World!')
.then(result => console.log(result)) // -> Hello World!
.catch(error => console.error(error))
// Works with Callbacks:
fn('Hi!', (error, result) => {
if (error) return console.error(error)
console.log(result)
// -> Hi!
})universalify.fromPromise(fn)Takes a promise-based function to universalify, and returns the universalified function.
Function must return a valid JS promise. universalify does not ensure that a valid promise is returned.
function promiseFn (n) {
return new Promise(resolve => {
setTimeout(() => resolve(n), 15)
})
}
const fn = universalify.fromPromise(promiseFn)
// Works with Promises:
fn('Hello World!')
.then(result => console.log(result)) // -> Hello World!
.catch(error => console.error(error))
// Works with Callbacks:
fn('Hi!', (error, result) => {
if (error) return console.error(error)
console.log(result)
// -> Hi!
})jQuery is a fast, small, and feature-rich JavaScript library.
For information on how to get started and how to use jQuery, please see jQuery’s documentation. For source files and issues, please visit the jQuery repo.
If upgrading, please see the blog post for 3.5.1. This includes notable differences from the previous version and a more readable changelog.
Below are some of the most common ways to include jQuery.
Babel is a next generation JavaScript compiler. One of the features is the ability to use ES6/ES2015 modules now, even though browsers do not yet support this feature natively.
There are several ways to use Browserify and Webpack. For more information on using these tools, please refer to the corresponding project’s documentation. In the script, including jQuery will usually look like this…
AMD is a module format built for the browser. For more information, we recommend require.js’ documentation.
To include jQuery in Node, first install with npm.
For jQuery to work in Node, a window with a document is required. Since no such window exists natively in Node, one can be mocked by tools such as jsdom. This can be useful for testing purposes.
const { JSDOM } = require( "jsdom" );
const { window } = new JSDOM( "" );
const $ = require( "jquery" )( window );Does a JS type’s prototype have a property?
Uses Sindre Sorhus’ proto-props
npm install --save is-proto-prop
import isProtoProp from 'is-proto-prop';
isProtoProp('array', 'length');
// => true
isProtoProp('Error', 'ignore');
// => false
// `is-proto-props` can only verify native JS types
isProtoProp('gulp', 'task');
// => falseReturns a Boolean if propertyName is on type’s prototype.
type: string
JS type to examine the prototype of. Note: is-proto-prop only looks at native JS types.
type: string
Property name to look for on type’s prototype. Note: propertyName is case sensitive.
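A toy version of the check, covering only a few built-ins (the real module knows every native type via the proto-props data; this lookup table is an illustrative stand-in):

```javascript
// A toy sketch of the prototype-property check. Only a handful of native
// types are listed here; the real module covers all of them.
function isProtoProp(typeName, propertyName) {
  const natives = { array: Array, error: Error, string: String, object: Object };
  const Ctor = natives[String(typeName).toLowerCase()];
  if (!Ctor) return false; // non-native types (e.g. 'gulp') are always false
  // propertyName is case sensitive, matching the module's behavior
  return propertyName in Ctor.prototype;
}

isProtoProp('array', 'length'); // => true
isProtoProp('Error', 'ignore'); // => false
isProtoProp('gulp', 'task');    // => false
```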

Implements a function similar to performance.now (based on process.hrtime).
Modern browsers have a window.performance object with - among others - a now method which gives time in milliseconds, but with sub-millisecond precision. This module offers the same function based on the Node.js native process.hrtime function.
According to the High Resolution Time specification, the number of milliseconds reported by performance.now should be relative to the value of performance.timing.navigationStart.
In the current version of the module (2.0) the reported time is relative to the time the current Node process has started (inferred from process.uptime()).
Version 1.0 reported a different time. The reported time was relative to the time the module was loaded (i.e. the time it was first required). If you need this functionality, version 1.0 is still available on NPM.
var now = require("performance-now")
var start = now()
var end = now()
console.log(start.toFixed(3)) // the number of milliseconds the current node process is running
console.log((end - start).toFixed(3)) // ~ 0.002 on my systemRunning the now function twice in a row yields a time difference of a few microseconds. Given this overhead, it’s best to assume that intervals computed with this method are no more precise than 10 microseconds, unless you know the exact overhead on your own system.
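The underlying idea can be sketched with process.hrtime directly (this anchors to module load time, like the 1.0 behavior described above; 2.0 instead offsets by the process start time inferred from process.uptime()):

```javascript
// A minimal now() built on process.hrtime, reporting milliseconds
// (with sub-millisecond precision) relative to the time this module
// was loaded.
const baseline = process.hrtime();

function now() {
  const delta = process.hrtime(baseline);
  return delta[0] * 1e3 + delta[1] / 1e6;
}
```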
ECMAScript spec abstract operations. When different versions of the spec conflict, the default export will be the latest version of the abstract operation. All abstract operations will also be available under an es5/es2015/es2016/es2017/es2018/es2019 entry point, and exported property, if you require a specific version.
var ES = require('es-abstract');
var assert = require('assert');
assert(ES.isCallable(function () {}));
assert(!ES.isCallable(/a/g));Simply clone the repo, npm install, and run npm test
Please email [@ljharb](https://github.com/ljharb) or see https://tidelift.com/security if you have a potential security vulnerability to report.
Is this JS value callable? Works with Functions and GeneratorFunctions, despite ES6 @@toStringTag.
var isCallable = require('is-callable');
var assert = require('assert');
assert.notOk(isCallable(undefined));
assert.notOk(isCallable(null));
assert.notOk(isCallable(false));
assert.notOk(isCallable(true));
assert.notOk(isCallable([]));
assert.notOk(isCallable({}));
assert.notOk(isCallable(/a/g));
assert.notOk(isCallable(new RegExp('a', 'g')));
assert.notOk(isCallable(new Date()));
assert.notOk(isCallable(42));
assert.notOk(isCallable(NaN));
assert.notOk(isCallable(Infinity));
assert.notOk(isCallable(new Number(42)));
assert.notOk(isCallable('foo'));
assert.notOk(isCallable(Object('foo')));
assert.ok(isCallable(function () {}));
assert.ok(isCallable(function* () {}));
assert.ok(isCallable(x => x * x));Install with
npm install is-callable
Simply clone the repo, npm install, and run npm test
Clone a Node.js HTTP response stream
Returns a new stream and copies over all properties and methods from the original response giving you a complete duplicate.
This is useful in situations where you need to consume the response stream but also want to pass an unconsumed stream somewhere else to be consumed later.
npm install --save clone-response
const http = require('http');
const cloneResponse = require('clone-response');
http.get('http://example.com', response => {
const clonedResponse = cloneResponse(response);
response.pipe(process.stdout);
setImmediate(() => {
// The response stream has already been consumed by the time this executes,
// however the cloned response stream is still available.
doSomethingWithResponse(clonedResponse);
});
});Please bear in mind that the process of cloning a stream consumes it. However, you can consume a stream multiple times in the same tick, therefore allowing you to create multiple clones. e.g:
const clone1 = cloneResponse(response);
const clone2 = cloneResponse(response);
// response can still be consumed in this tick but cannot be consumed if passed
// into any async callbacks. clone1 and clone2 can be passed around and be
// consumed in the future.Returns a clone of the passed in response.
Type: stream
A Node.js HTTP response stream to clone.
JSON Schema for HTTP Archive (HAR).
Compatible with any JSON Schema validation tool.
Comprehensive MIME type mapping API based on mime-db module.
Install with npm:
npm install mime
npm run test
mime [path_string]
E.g.
> mime scripts/jquery.js
application/javascript
Get the mime type associated with a file. If no mime type is found, application/octet-stream is returned. Performs a case-insensitive lookup using the extension in path (the substring after the last ‘/’ or ‘.’). E.g.
var mime = require('mime');
mime.lookup('/path/to/file.txt'); // => 'text/plain'
mime.lookup('file.txt'); // => 'text/plain'
mime.lookup('.TXT'); // => 'text/plain'
mime.lookup('htm'); // => 'text/html'Sets the mime type returned when mime.lookup fails to find the extension searched for. (Default is application/octet-stream.)
Get the default extension for type
Map mime-type to charset
(The logic for charset lookups is pretty rudimentary. Feel free to suggest improvements.)
Custom type mappings can be added on a per-project basis via the following APIs.
Add custom mime/extension mappings
mime.define({
'text/x-some-format': ['x-sf', 'x-sft', 'x-sfml'],
'application/x-my-type': ['x-mt', 'x-mtt'],
// etc ...
});
mime.lookup('x-sft'); // => 'text/x-some-format'The first entry in the extensions array is returned by mime.extension(). E.g.
Load mappings from an Apache “.types” format file
The .types file format is simple - See the types dir for examples.
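For illustration, a .types file pairs a type with its extensions, one mapping per line (this sample file and its name are hypothetical):

```
# my-project.types -- comment lines start with '#'
text/x-some-format      x-sf x-sft x-sfml
application/x-my-type   x-mt x-mtt
```

Loading it is then a one-liner: mime.load('./my-project.types').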
v8-compile-cache attaches a require hook to use V8’s code cache to speed up instantiation time. The “code cache” is the work of parsing and compiling done by V8.
The ability to tap into V8 to produce/consume this cache was introduced in Node v5.7.0.
Requiring v8-compile-cache in Node <5.7.0 is a noop – but you need at least Node 4.0.0 to support the ES2015 syntax used by v8-compile-cache.
Set the environment variable DISABLE_V8_COMPILE_CACHE=1 to disable the cache.
Cache directory is defined by environment variable V8_COMPILE_CACHE_CACHE_DIR or defaults to <os.tmpdir()>/v8-compile-cache-<V8_VERSION>.
Cache files are suffixed .BLOB and .MAP corresponding to the entry module that required v8-compile-cache. The cache is entry module specific because it is faster to load the entire code cache into memory at once, than it is to read it from disk on a file-by-file basis.
See https://github.com/zertosh/v8-compile-cache/tree/master/bench.
Load Times:
| Module | Without Cache | With Cache |
|---|---|---|
| babel-core | 218ms | 185ms |
| yarn | 153ms | 113ms |
| yarn (bundled) | 228ms | 105ms |
^ Includes the overhead of loading the cache itself.
FileSystemBlobStore and NativeCompileCache are based on Atom’s implementation of their v8 compile cache:
mkdirpSync is based on:
ESQuery is a library for querying the AST output by Esprima for patterns of syntax using a CSS style selector system. Check out the demo:
The following selectors are supported:
- AST node type: ForStatement
- wildcard: *
- attribute existence: [attr]
- attribute value: [attr="foo"] or [attr=123]
- attribute regex: [attr=/foo.*/] or (with flags) [attr=/foo.*/is]
- attribute conditions: [attr!="foo"], [attr>2], [attr<3], [attr>=2], or [attr<=3]
- nested attribute: [attr.level2="foo"]
- field: FunctionDeclaration > Identifier.id
- first or last child: :first-child or :last-child
- nth-child (no ax+b support): :nth-child(2)
- nth-last-child (no ax+b support): :nth-last-child(1)
- descendant: ancestor descendant
- child: parent > child
- following sibling: node ~ sibling
- adjacent sibling: node + adjacent
- negation: :not(ForStatement)
- has: :has(ForStatement)
- matches-any: :matches([attr] > :first-child, :last-child)
- subject indicator: !IfStatement > [name="foo"]
- class of AST node: :statement, :expression, :declaration, :function, or :pattern
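As a toy illustration of the simplest selector kind, a type selector just walks the AST collecting nodes whose type matches (this is a sketch of the idea, not esquery’s API):

```javascript
// Collect all nodes of a given type from an ESTree-like AST: the moral
// equivalent of the "ForStatement" selector above.
function queryType(node, type, results = []) {
  if (!node || typeof node !== 'object') return results;
  if (node.type === type) results.push(node);
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (Array.isArray(child)) {
      child.forEach(c => queryType(c, type, results));
    } else {
      queryType(child, type, results);
    }
  }
  return results;
}

const ast = {
  type: 'Program',
  body: [{ type: 'ForStatement', body: { type: 'BlockStatement', body: [] } }]
};
queryType(ast, 'ForStatement').length; // => 1
```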
This is a fast polyfill for TextEncoder and TextDecoder, which let you encode and decode JavaScript strings into UTF-8 bytes.
It is fast partly because it does not support any encodings aside from UTF-8 (and note that natively, only TextDecoder supports alternative encodings anyway). See some benchmarks.
Install as “fast-text-encoding” via your favourite package manager.
You only need this polyfill if you’re supporting older browsers like IE, legacy Edge, ancient Chrome and Firefox, or Node before v11.
Include the minified code inside a script tag or as an ES6 Module for its side effects. It will create TextEncoder and TextDecoder if the symbols are missing on window or global.
<script src="node_modules/fast-text-encoding/text.min.js"></script>
<script type="module">
import './node_modules/fast-text-encoding/text.min.js';
import 'fast-text-encoding'; // or perhaps this
// confidently do something with TextEncoder or TextDecoder \o/
</script>⚠️ You’ll probably want to depend on text.min.js, as it’s compiled to ES5 for older environments.
You only need this polyfill in Node before v11. However, you can use Buffer to provide the same functionality (but not conforming to any spec) in versions even older than that.
require('fast-text-encoding'); // just require me before use
const buffer = new TextEncoder().encode('Turn me into UTF-8!');
// buffer is now a Uint8Array of [84, 117, 114, 110, ...]In Node v5.1 and above, this polyfill uses Buffer to implement TextDecoder.
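The Buffer-based equivalent of that round trip looks like this (Node only; a sketch of what the fallback boils down to, not the polyfill’s exact code):

```javascript
// UTF-8 encode a string to bytes and decode it back using Buffer,
// mirroring what the polyfill does on older Node versions.
const encoded = Buffer.from('Turn me into UTF-8!', 'utf8');
// encoded is a Uint8Array subclass: [84, 117, 114, 110, ...]
const decoded = encoded.toString('utf8');
// decoded === 'Turn me into UTF-8!'
```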
Compile code with Closure Compiler.
// ==ClosureCompiler==
// @compilation_level ADVANCED_OPTIMIZATIONS
// @output_file_name text.min.js
// ==/ClosureCompiler==
// code here
Destroy a stream.
This module is meant to ensure a stream gets destroyed, handling different APIs and Node.js bugs.
Destroy the given stream. In most cases, this is identical to a simple stream.destroy() call. The rules are as follows for a given stream:
- If stream is an instance of ReadStream, call stream.destroy() and add a listener to the open event that calls stream.close() if it fires. This works around a Node.js bug that leaks a file descriptor when .destroy() is called before open.
- If stream is not an instance of Stream, nothing happens.
- If stream has a .destroy() method, call it.
The function returns the stream passed in as the argument.
var destroy = require('destroy')
var fs = require('fs')
var stream = fs.createReadStream('package.json')
// ... and later
destroy(stream)json-parse-better-errors is a Node.js library for getting nicer errors out of JSON.parse(), including context and position of the parse errors.
npm install --save json-parse-better-errors
const parseJson = require('json-parse-better-errors')
parseJson('"foo"')
parseJson('garbage') // more useful error messageThe npm team enthusiastically welcomes contributions and project participation! There’s a bunch of things you can do if you want to contribute! The Contributor Guide has all the information you need for everything from reporting bugs to contributing entire new features. Please don’t hesitate to jump in if you’d like to, or even ask us questions if something isn’t clear.
All participants and maintainers in this project are expected to follow Code of Conduct, and just generally be excellent to each other.
Please refer to the Changelog for project history details, too.
Happy hacking!
> parse(txt, ?reviver, ?context=20)Works just like JSON.parse, but will include a bit more information when an error happens.
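The general idea, surfacing where the parse failed, can be sketched as a thin wrapper (illustrative only, not the library’s implementation; V8’s error message format varies across Node versions):

```javascript
// A toy wrapper that appends nearby text to JSON.parse errors when V8's
// message includes "at position N" (older Node versions; newer ones embed
// a snippet themselves). The function name is hypothetical.
function parseJsonWithContext(txt, reviver, context = 20) {
  try {
    return JSON.parse(txt, reviver);
  } catch (err) {
    const match = /position (\d+)/.exec(err.message);
    if (match) {
      const pos = Number(match[1]);
      const near = txt.slice(Math.max(0, pos - context), pos + context);
      err.message += ` while parsing near '${near}'`;
    }
    throw err;
  }
}

parseJsonWithContext('"foo"'); // => 'foo'
```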
An ES2017 spec-compliant Object.values shim. Invoke its “shim” method to shim Object.values if it is unavailable or noncompliant.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec.
Most common usage:
var assert = require('assert');
var values = require('object.values');
var obj = { a: 1, b: 2, c: 3 };
var expected = [1, 2, 3];
if (typeof Symbol === 'function' && typeof Symbol() === 'symbol') {
// for environments with Symbol support
var sym = Symbol();
obj[sym] = 4;
obj.d = sym;
expected.push(sym);
}
assert.deepEqual(values(obj), expected);
if (!Object.values) {
values.shim();
}
assert.deepEqual(Object.values(obj), expected);Simply clone the repo, npm install, and run npm test
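For ordinary objects, the operation behind the shim reduces to mapping own enumerable string keys to their values (a sketch; the real shim also handles edge cases such as boxed primitives):

```javascript
// Object.values, for ordinary objects: the values of own enumerable
// string-keyed properties, in key order. Symbol keys are excluded.
function objectValues(obj) {
  return Object.keys(Object(obj)).map(function (key) { return obj[key]; });
}

objectValues({ a: 1, b: 2, c: 3 }); // => [1, 2, 3]
```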
Timings for HTTP requests
Inspired by the request package.
'use strict';
const https = require('https');
const timer = require('@szmarczak/http-timer');
const request = https.get('https://httpbin.org/anything');
const timings = timer(request);
request.on('response', response => {
response.on('data', () => {}); // Consume the data somehow
response.on('end', () => {
console.log(timings);
});
});
// { start: 1535708511443,
// socket: 1535708511444,
// lookup: 1535708511444,
// connect: 1535708511582,
// upload: 1535708511887,
// response: 1535708512037,
// end: 1535708512040,
// phases:
// { wait: 1,
// dns: 0,
// tcp: 138,
// request: 305,
// firstByte: 150,
// download: 3,
// total: 597 } }Returns: Object
- start - Time when the request started.
- socket - Time when a socket was assigned to the request.
- lookup - Time when the DNS lookup finished.
- connect - Time when the socket successfully connected.
- upload - Time when the request finished uploading.
- response - Time when the request fired the response event.
- end - Time when the response fired the end event.
- error - Time when the request fired the error event.
phases
- wait - timings.socket - timings.start
- dns - timings.lookup - timings.socket
- tcp - timings.connect - timings.lookup
- request - timings.upload - timings.connect
- firstByte - timings.response - timings.upload
- download - timings.end - timings.response
- total - timings.end - timings.start or timings.error - timings.start
Note: The time is a number representing the milliseconds elapsed since the UNIX epoch.
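Those phase definitions can be written out directly; plugging in the sample timestamps from the example above reproduces the phases object shown:

```javascript
// Derive the phase durations from the raw event timestamps, exactly as
// defined above (all values are epoch milliseconds).
function computePhases(t) {
  return {
    wait: t.socket - t.start,
    dns: t.lookup - t.socket,
    tcp: t.connect - t.lookup,
    request: t.upload - t.connect,
    firstByte: t.response - t.upload,
    download: t.end - t.response,
    total: (t.end !== undefined ? t.end : t.error) - t.start
  };
}

computePhases({
  start: 1535708511443, socket: 1535708511444, lookup: 1535708511444,
  connect: 1535708511582, upload: 1535708511887, response: 1535708512037,
  end: 1535708512040
});
// => { wait: 1, dns: 0, tcp: 138, request: 305, firstByte: 150, download: 3, total: 597 }
```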
An ES2019-spec-compliant String.prototype.trimEnd shim. Invoke its “shim” method to shim String.prototype.trimEnd if it is unavailable.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.
Most common usage:
var trimEnd = require('string.prototype.trimend');
assert(trimEnd(' \t\na \t\n') === ' \t\na');
if (!String.prototype.trimEnd) {
trimEnd.shim();
}
assert(trimEnd(' \t\na \t\n ') === ' \t\na \t\n '.trimEnd());Simply clone the repo, npm install, and run npm test
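A regex-based sketch of the operation (the spec’s whitespace set differs slightly from \s in some engines; the real shim is precise about this):

```javascript
// Strip trailing whitespace only: a simplified stand-in for the shim.
function trimEnd(str) {
  return String(str).replace(/[\s\uFEFF\xA0]+$/, '');
}

trimEnd(' \t\na \t\n'); // => ' \t\na'
```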
Range header field parser.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Parse the given header string, where size is the maximum size of the resource. An array of ranges will be returned, or a negative number indicating a parsing error:
- -2 signals a malformed header string
- -1 signals an unsatisfiable range
// parse header from request
var range = parseRange(size, req.headers.range)
// the type of the range
if (range.type === 'bytes') {
// the ranges
range.forEach(function (r) {
// do something with r.start and r.end
})
}These properties are accepted in the options object.
Specifies if overlapping & adjacent ranges should be combined, defaults to false. When true, ranges will be combined and returned as if they were specified that way in the header.
parseRange(100, 'bytes=50-55,0-10,5-10,56-60', { combine: true })
// => [
// { start: 0, end: 10 },
// { start: 50, end: 60 }
// ]An ES2019-spec-compliant String.prototype.trimStart shim. Invoke its “shim” method to shim String.prototype.trimStart if it is unavailable.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.
Most common usage:
var trimStart = require('string.prototype.trimstart');
assert(trimStart(' \t\na \t\n') === 'a \t\n');
if (!String.prototype.trimStart) {
trimStart.shim();
}
assert(trimStart(' \t\na \t\n') === ' \t\na \t\n'.trimStart());Simply clone the repo, npm install, and run npm test
Return true if a file path contains the given path.
Install with npm
true
All of the following return true:
containsPath('./a/b/c', 'a');
containsPath('./a/b/c', 'a/b');
containsPath('./b/a/b/c', 'a/b');
containsPath('/a/b/c', '/a/b');
containsPath('/a/b/c', 'a/b');
containsPath('a', 'a');
containsPath('a/b/c', 'a');
//=> true
false
All of the following return false:
containsPath('abc', 'a');
containsPath('abc', 'a.md');
containsPath('./b/a/b/c', './a/b');
containsPath('./b/a/b/c', './a');
containsPath('./b/a/b/c', '/a/b');
containsPath('/b/a/b/c', '/a/b');
//=> false
- true if the given string or array ends with suffix using strict equality for… more
- true if the path appears to be relative.
- true if a file path ends with the given string/suffix.
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue
Jon Schlinkert
This file was generated by verb-cli on July 07, 2015. # media-typer
Simple RFC 6838 media type parser
Parse a media type string. This will return an object with the following properties (examples are shown for the string 'image/svg+xml; charset=utf-8'):
type: The type of the media type (always lower case). Example: 'image'
subtype: The subtype of the media type (always lower case). Example: 'svg'
suffix: The suffix of the media type (always lower case). Example: 'xml'
parameters: An object of the parameters in the media type (name of parameter always lower case). Example: {charset: 'utf-8'}
Parse the content-type header from the given req. Short-cut for typer.parse(req.headers['content-type']).
Parse the content-type header set on the given res. Short-cut for typer.parse(res.getHeader('content-type')).
Format an object into a media type string. This will return a string of the mime type for the given object. For the properties of the object, see the documentation for typer.parse(string).
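A simplified parse of such a string, producing the shape described above (illustrative only; the real parser validates against the RFC 6838 grammar):

```javascript
// Toy media-type parsing: 'image/svg+xml; charset=utf-8' ->
// { type, subtype, suffix, parameters }, all names lower-cased.
function parseMediaType(str) {
  const parts = str.split(';').map(function (s) { return s.trim(); });
  const typePair = parts[0].toLowerCase().split('/');
  const plus = typePair[1].lastIndexOf('+');
  const parameters = {};
  parts.slice(1).forEach(function (param) {
    const eq = param.indexOf('=');
    parameters[param.slice(0, eq).toLowerCase()] = param.slice(eq + 1);
  });
  return {
    type: typePair[0],
    subtype: plus === -1 ? typePair[1] : typePair[1].slice(0, plus),
    suffix: plus === -1 ? undefined : typePair[1].slice(plus + 1),
    parameters: parameters
  };
}
```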
Create an object path from a list or array of strings.
Install with npm
var toPath = require('to-object-path');
toPath('foo', 'bar', 'baz');
toPath('foo', ['bar', 'baz']);
//=> 'foo.bar.baz'Also supports passing an arguments object (without having to slice args):
function foo() {
return toPath(arguments);
}
foo('foo', 'bar', 'baz');
foo('foo', ['bar', 'baz']);
//=> 'foo.bar.baz'Visit the example to see how this could be used in an application.
Related: use property paths (a.b.c) to get a nested value from an object; create nested values using ('a.b.c') paths.
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Jon Schlinkert
This file was generated by verb-cli on October 28, 2015. # define-property
Define a non-enumerable property on an object.
Install with npm
Params
- obj: The object on which to define the property.
- prop: The name of the property to be defined or modified.
- descriptor: The descriptor for the property being defined or modified.
var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
return val.toUpperCase();
});
console.log(obj);
//=> {}
console.log(obj.foo('bar'));
//=> 'BAR'get/set
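The example above is roughly equivalent to calling Object.defineProperty directly (a sketch under that assumption; the real module also accepts full descriptors, hence the get/set note):

```javascript
// Define a non-enumerable property using Object.defineProperty directly.
function define(obj, prop, val) {
  Object.defineProperty(obj, prop, {
    configurable: true,
    enumerable: false, // this is why console.log(obj) prints {}
    writable: true,
    value: val
  });
  return obj;
}

const obj = define({}, 'foo', function (val) { return val.toUpperCase(); });
Object.keys(obj); // => []
obj.foo('bar');   // => 'BAR'
```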
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Jon Schlinkert
This file was generated by verb-cli on August 31, 2015.
Test if an object is a Stream
The missing Stream.isStream(obj): determine whether an object is a standard Node.js Stream. Works for Node-core Stream objects (0.8, 0.10, 0.11, and in theory older and newer versions) and all versions of readable-stream.
var isStream = require('isstream')
var Stream = require('stream')
isStream(new Stream()) // true
isStream({}) // false
isStream(new Stream.Readable()) // true
isStream(new Stream.Writable()) // true
isStream(new Stream.Duplex()) // true
isStream(new Stream.Transform()) // true
isStream(new Stream.PassThrough()) // trueYou can also test for isReadable(obj), isWritable(obj) and isDuplex(obj) to test for implementations of Streams2 (and Streams3) base classes.
var isReadable = require('isstream').isReadable
var isWritable = require('isstream').isWritable
var isDuplex = require('isstream').isDuplex
var Stream = require('stream')
isReadable(new Stream()) // false
isWritable(new Stream()) // false
isDuplex(new Stream()) // false
isReadable(new Stream.Readable()) // true
isReadable(new Stream.Writable()) // false
isReadable(new Stream.Duplex()) // true
isReadable(new Stream.Transform()) // true
isReadable(new Stream.PassThrough()) // true
isWritable(new Stream.Readable()) // false
isWritable(new Stream.Writable()) // true
isWritable(new Stream.Duplex()) // true
isWritable(new Stream.Transform()) // true
isWritable(new Stream.PassThrough()) // true
isDuplex(new Stream.Readable()) // false
isDuplex(new Stream.Writable()) // false
isDuplex(new Stream.Duplex()) // true
isDuplex(new Stream.Transform()) // true
isDuplex(new Stream.PassThrough()) // trueReminder: when implementing your own streams, please use readable-stream rather than core streams.
Esprima can be used to perform lexical analysis (tokenization) or syntactic analysis (parsing) of a JavaScript program.
A simple example on Node.js REPL:
> var esprima = require('esprima');
> var program = 'const answer = 42';
> esprima.tokenize(program);
[ { type: 'Keyword', value: 'const' },
{ type: 'Identifier', value: 'answer' },
{ type: 'Punctuator', value: '=' },
{ type: 'Numeric', value: '42' } ]
> esprima.parseScript(program);
{ type: 'Program',
body:
[ { type: 'VariableDeclaration',
declarations: [Object],
kind: 'const' } ],
sourceType: 'script' }
For more information, please read the complete documentation.
# object-keys
An Object.keys shim. Invoke its “shim” method to shim Object.keys if it is unavailable.
Most common usage:
var keys = require('object-keys');
var assert = require('assert');
var obj = {
a: true,
b: true,
c: true
};
assert.deepEqual(keys(obj), ['a', 'b', 'c']);
var keys = require('object-keys');
var assert = require('assert');
/* when Object.keys is not present */
delete Object.keys;
var shimmedKeys = keys.shim();
assert.equal(shimmedKeys, keys);
assert.deepEqual(Object.keys(obj), keys(obj));
var keys = require('object-keys');
var assert = require('assert');
/* when Object.keys is present */
var shimmedKeys = keys.shim();
assert.equal(shimmedKeys, Object.keys);
assert.deepEqual(Object.keys(obj), keys(obj));
Implementation taken directly from es5-shim, with modifications, including from lodash.
Simply clone the repo, npm install, and run npm test
Webpack-literate module resolution plugin for eslint-plugin-import.
Published separately to allow pegging to a specific version in case of breaking changes.
To use with eslint-plugin-import, run:
npm i eslint-import-resolver-webpack -g
or if you manage ESLint as a dev dependency:
# inside your project's working tree
npm install eslint-import-resolver-webpack --save-dev
Will look for webpack.config.js as a sibling of the first ancestral package.json, or a config parameter may be provided with another filename/path either relative to the package.json, or a complete, absolute path.
If multiple webpack configurations are found the first configuration containing a resolve section will be used. Optionally, the config-index (zero-based) setting can be used to select a specific configuration.
or with explicit config file name:
or with explicit config file index:
---
settings:
  import/resolver:
    webpack:
      config: 'webpack.multiple.config.js'
      config-index: 1 # take the config at index 1
or with explicit config file path relative to your project’s working directory:
or with explicit config object:
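As a sketch, an inline config object might look like this (the resolve fields shown are ordinary webpack options, chosen purely for illustration):

```yaml
---
settings:
  import/resolver:
    webpack:
      config:
        resolve:
          extensions: ['.js', '.jsx']
          modules: ['src', 'node_modules']
```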
If your config relies on environment variables, they can be specified using the env parameter. If your config is a function, it will be invoked with the value assigned to env:
---
settings:
  import/resolver:
    webpack:
      config: 'webpack.config.js'
      env:
        NODE_ENV: 'local'
        production: true
Get supported eslint-import-resolver-webpack with the Tidelift Subscription
ASN.1 DER Encoder/Decoder and DSL.
Define model:
var asn = require('asn1.js');
var Human = asn.define('Human', function() {
this.seq().obj(
this.key('firstName').octstr(),
this.key('lastName').octstr(),
this.key('age').int(),
this.key('gender').enum({ 0: 'male', 1: 'female' }),
this.key('bio').seqof(Bio)
);
});
var Bio = asn.define('Bio', function() {
this.seq().obj(
this.key('time').gentime(),
this.key('description').octstr()
);
});
Encode data:
var output = Human.encode({
firstName: 'Thomas',
lastName: 'Anderson',
age: 28,
gender: 'male',
bio: [
{
time: +new Date('31 March 1999'),
description: 'freedom of mind'
}
]
}, 'der');
Decode data:
var human = Human.decode(output, 'der');
console.log(human);
/*
{ firstName: <Buffer 54 68 6f 6d 61 73>,
lastName: <Buffer 41 6e 64 65 72 73 6f 6e>,
age: 28,
gender: 'male',
bio:
[ { time: 922820400000,
description: <Buffer 66 72 65 65 64 6f 6d 20 6f 66 20 6d 69 6e 64> } ] }
*/
It’s possible to parse data without stopping on the first error. In order to do so, call:
var human = Human.decode(output, 'der', { partial: true });
console.log(human);
/*
{ result: { ... },
errors: [ ... ] }
*/
This package consists of two major parts: utilities and typeguard functions. By importing the project you will get both of them.
import * as utils from "tsutils";
utils.isIdentifier(node); // typeguard
utils.getLineRanges(sourceFile); // utilities
If you don’t need everything offered by this package, you can select what should be imported. The parts that are not imported are never read from disk, which may save some startup time and reduce memory consumption.
If you only need typeguards you can explicitly import them:
import { isIdentifier } from "tsutils/typeguard";
// You can even distinguish between typeguards for nodes and types
import { isUnionTypeNode } from "tsutils/typeguard/node";
import { isUnionType } from "tsutils/typeguard/type";
If you only need the utilities you can also explicitly import them:
import { forEachToken } from "tsutils/util";
This package is backwards compatible with typescript 2.8.0 at runtime although compiling might need a newer version of typescript installed.
Using typescript@next might work, but it’s not officially supported. If you encounter any bugs, please open an issue.
For compatibility with older versions of TypeScript typeguard functions are separated by TypeScript version. If you are stuck on typescript@2.8, you should import directly from the submodule for that version:
// all typeguards compatible with typescript@2.8
import { isIdentifier } from "tsutils/typeguard/2.8";
// you can even use nested submodules
import { isIdentifier } from "tsutils/typeguard/2.8/node";
// all typeguards compatible with typescript@2.9 (includes those of 2.8)
import { isIdentifier } from "tsutils/typeguard/2.9";
// always points to the latest stable version (2.9 as of writing this)
import { isIdentifier } from "tsutils/typeguard";
import { isIdentifier } from "tsutils";
// always points to the typeguards for the next TypeScript version (3.0 as of writing this)
import { isIdentifier } from "tsutils/typeguard/next";
Note that if you are also using utility functions, you should prefer the relevant submodule:
// importing directly from 'tsutils' would pull in the latest typeguards
import { forEachToken } from 'tsutils/util';
import { isIdentifier } from 'tsutils/typeguard/2.8';
Node.js’s util module for all engines.
This implements the Node.js util module for environments that do not have it, like browsers.
You usually do not have to install util yourself. If your code runs in Node.js, util is built in. If your code runs in the browser, bundlers like browserify or webpack also include the util module.
But if none of those apply, with npm do:
npm install util
var util = require('util')
var EventEmitter = require('events')
function MyClass() { EventEmitter.call(this) }
util.inherits(MyClass, EventEmitter)
The util module uses ES5 features. If you need to support very old browsers like IE8, use a shim like es5-shim. You need both the shim and the sham versions of es5-shim.
To use util.promisify and util.callbackify, Promises must already be available. If you need to support browsers like IE11 that do not support Promises, use a shim. es6-promise is a popular one but there are many others available on npm.
See the Node.js util docs. util currently supports the Node 8 LTS API. However, some of the methods are outdated. The inspect and format methods included in this module are a lot more simple and barebones than the ones in Node.js.
PRs are very welcome! The main way to contribute to util is by porting features, bugfixes and tests from Node.js. Ideally, code contributions to this module are copy-pasted from Node.js and transpiled to ES5, rather than reimplemented from scratch. Matching the Node.js code as closely as possible makes maintenance simpler when new changes land in Node.js. This module intends to provide exactly the same API as Node.js, so features that are not available in the core util module will not be accepted. Feature requests should instead be directed at nodejs/node and will be added to this module once they are implemented in Node.js.
If there is a difference in behaviour between Node.js’s util module and this module, please open an issue!
http.Agent implementation for HTTP
This module provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in http module.
Note: For HTTP proxy usage with the https module, check out node-https-proxy-agent.
Install with npm:
var url = require('url');
var http = require('http');
var HttpProxyAgent = require('http-proxy-agent');
// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);
// HTTP endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'http://nodejs.org/api/';
console.log('attempting to GET %j', endpoint);
var opts = url.parse(endpoint);
// create an instance of the `HttpProxyAgent` class with the proxy server information
var agent = new HttpProxyAgent(proxy);
opts.agent = agent;
http.get(opts, function (res) {
console.log('"response" event!', res.headers);
res.pipe(process.stdout);
});
Returns true if a value is any of the object types: array, regexp, plain object, function or date. This is useful for determining if a value can be extended, e.g. “can the value have keys?”
Install with npm
Returns true if the value is any of the following:
array, regexp, plain object, function, date, error
All objects in JavaScript can have keys, but it’s a pain to check for this, since we either need to verify that the value is not null or undefined and that its typeof is object or function.
Also note that an extendable object is not the same as an extensible object, which is one that (in es6) is not sealed, frozen, or marked as non-extensible using preventExtensions.
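The check described above can be sketched as follows (an illustration, not the module’s actual source):

```javascript
// Minimal sketch of the check described in the text: a value is considered
// "extendable" when it is not null/undefined and its typeof is 'object'
// or 'function' (which covers arrays, regexps, dates, errors, functions
// and plain objects).
function isExtendable(value) {
  return value != null &&
    (typeof value === 'object' || typeof value === 'function');
}

console.log(isExtendable({}));             // true
console.log(isExtendable([]));             // true
console.log(isExtendable(new Date()));     // true
console.log(isExtendable(function () {})); // true
console.log(isExtendable(/re/));           // true
console.log(isExtendable(null));           // false
console.log(isExtendable('string'));       // false
```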
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue
Jon Schlinkert
This file was generated by verb-cli on July 04, 2015.
# gcp-metadata
> Get the metadata from a Google Cloud Platform environment.
const data = await gcpMetadata.instance('hostname');
console.log(data); // ...Instance hostname
const projectId = await gcpMetadata.project('project-id');
console.log(projectId); // ...Project ID of the running instanceconst data = await gcpMetadata.instance('service-accounts/default/email');
console.log(data); // ...Email address of the Compute identity service account
const data = await gcpMetadata.instance({
property: 'tags',
params: { alt: 'text' }
});
console.log(data) // ...Tags as newline-delimited list
In some cases number valued properties returned by the Metadata Service may be too large to be representable as JavaScript numbers. In such cases we return those values as BigNumber objects (from the bignumber.js library). Numbers that fit within the JavaScript number range will be returned as normal number values.
const id = await gcpMetadata.instance('id');
console.log(id) // ... BigNumber { s: 1, e: 18, c: [ 45200, 31799277581759 ] }
console.log(id.toString()) // ... 4520031799277581759
For example:
export GCE_METADATA_HOST='169.254.169.254'
Regular expression for testing if a file path is a windows UNC file path. Can also be used as a component of another regexp via the .source property.
Visit the MSDN reference for Common Data Types 2.2.57 UNC for more information about UNC paths.
Install with npm
true
Returns true for windows UNC paths:
regex.test('\\/foo/bar');
regex.test('\\\\foo/bar');
regex.test('\\\\foo\\admin$');
regex.test('\\\\foo\\admin$\\system32');
regex.test('\\\\foo\\temp');
regex.test('\\\\/foo/bar');
regex.test('\\\\\\/foo/bar');
false
Returns false for non-UNC paths:
regex.test('/foo/bar');
regex.test('/');
regex.test('/foo');
regex.test('/foo/');
regex.test('c:');
regex.test('c:.');
regex.test('c:./');
regex.test('c:./file');
regex.test('c:/');
regex.test('c:/file');
Returns true if the given string looks like a glob pattern.
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue
Jon Schlinkert
This file was generated by verb-cli on July 07, 2015.
# EE First
Get the first event in a set of event emitters and event pairs, then clean up after itself.
Invoke listener on the first event from the list specified in arr. arr is an array of arrays, with each array in the format [ee, ...event]. listener will be called only once, the first time any of the given events are emitted. If error is one of the listened events, then if that fires first, the listener will be given the err argument.
The listener is invoked as listener(err, ee, event, args), where err is the first argument emitted from an error event, if applicable; ee is the event emitter that fired; event is the string event name that fired; and args is an array of the arguments that were emitted on the event.
var ee1 = new EventEmitter()
var ee2 = new EventEmitter()
first([
[ee1, 'close', 'end', 'error'],
[ee2, 'error']
], function (err, ee, event, args) {
// listener invoked
})The group of listeners can be cancelled before being invoked and have all the event listeners removed from the underlying event emitters.
var thunk = first([
[ee1, 'close', 'end', 'error'],
[ee2, 'error']
], function (err, ee, event, args) {
// listener invoked
})
// cancel and clean up
thunk.cancel()
An ES2019 spec-compliant Array.prototype.flat shim/polyfill/replacement that works as far down as ES3.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the proposed spec.
Because Array.prototype.flat depends on a receiver (the this value), the main export takes the array to operate on as the first argument.
var flat = require('array.prototype.flat');
var assert = require('assert');
var arr = [1, [2], [], 3, [[4]]];
assert.deepEqual(flat(arr, 1), [1, 2, 3, [4]]);
var flat = require('array.prototype.flat');
var assert = require('assert');
/* when Array#flat is not present */
delete Array.prototype.flat;
var shimmedFlat = flat.shim();
assert.equal(shimmedFlat, flat.getPolyfill());
assert.deepEqual(arr.flat(), flat(arr));var flat = require('array.prototype.flat');
var assert = require('assert');
/* when Array#flat is present */
var shimmedFlat = flat.shim();
assert.equal(shimmedFlat, Array.prototype.flat);
assert.deepEqual(arr.flat(1), flat(arr, 1));
Simply clone the repo, npm install, and run npm test
Get the status of a file with some features.
Wrapper over standard methods (fs.lstat, fs.stat) with some features.
npm install @nodelib/fs.stat
const fsStat = require('@nodelib/fs.stat');
fsStat.stat('path').then((stat) => {
console.log(stat); // => fs.Stats
});
Returns a Promise<fs.Stats> for the provided path.
Returns a fs.Stats for the provided path.
Returns a fs.Stats for the provided path with a standard callback style.
Type: string | Buffer | URL. The path argument for the fs.lstat or fs.stat method.
Type: Object. See the options section for more detailed information.
Type: boolean. Default: true. Throw an error or return information about the symlink when a symlink is broken. When false, methods will return the result of an lstat call for broken symlinks.
Type: boolean. Default: true. By default, the methods of this package follow symlinks. If you do not want that, set this option to false or use the standard method fs.lstat.
Type: FileSystemAdapter. Default: built-in FS methods. By default, the built-in Node.js module (fs) is used to work with the file system. You can replace each method with your own.
interface FileSystemAdapter {
lstat?: typeof fs.lstat;
stat?: typeof fs.stat;
lstatSync?: typeof fs.lstatSync;
statSync?: typeof fs.statSync;
}
See the Releases section of our GitHub project for changelogs for each release version.
emoji-regex offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard.
This repository contains a script that generates this regular expression based on the data from Unicode v12. Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard.
Via npm:
In Node.js:
const emojiRegex = require('emoji-regex');
// Note: because the regular expression has the global flag set, this module
// exports a function that returns the regex rather than exporting the regular
// expression itself, to make it impossible to (accidentally) mutate the
// original regular expression.
const text = `
\u{231A}: ⌚ default emoji presentation character (Emoji_Presentation)
\u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji
\u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base)
\u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier
`;
const regex = emojiRegex();
let match;
while (match = regex.exec(text)) {
const emoji = match[0];
console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`);
}Console output:
Matched sequence ⌚ — code points: 1
Matched sequence ⌚ — code points: 1
Matched sequence ↔️ — code points: 2
Matched sequence ↔️ — code points: 2
Matched sequence 👩 — code points: 1
Matched sequence 👩 — code points: 1
Matched sequence 👩🏿 — code points: 2
Matched sequence 👩🏿 — code points: 2
To match emoji in their textual representation as well (i.e. emoji that are not Emoji_Presentation symbols and that aren’t forced to render as emoji by a variation selector), require the other regex:
Additionally, in environments which support ES2015 Unicode escapes, you may require ES2015-style versions of the regexes:
const emojiRegex = require('emoji-regex/es2015/index.js');
const emojiRegexText = require('emoji-regex/es2015/text.js');
Mathias Bynens
Traverse JSON Schema passing each schema object to callback
npm install json-schema-traverse
const traverse = require('json-schema-traverse');
const schema = {
properties: {
foo: {type: 'string'},
bar: {type: 'integer'}
}
};
traverse(schema, {cb});
// cb is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}
// Or:
traverse(schema, {cb: {pre, post}});
// pre is called 3 times with:
// 1. root schema
// 2. {type: 'string'}
// 3. {type: 'integer'}
//
// post is called 3 times with:
// 1. {type: 'string'}
// 2. {type: 'integer'}
// 3. root schema
Callback function cb is called for each schema object (not including draft-06 boolean schemas), including the root schema, in pre-order traversal. Schema references ($ref) are not resolved; they are passed as is. Alternatively, you can pass a {pre, post} object as cb, and then pre will be called before traversing child elements, and post will be called after all child elements have been traversed.
Callback is passed these parameters:
schema: the current schema object
JSON pointer: from the root schema to the current schema object
root schema: the schema passed to the traverse object
parent JSON pointer: from the root schema to the parent schema object
parent keyword: the keyword inside which this schema appears (properties, anyOf, etc.)
parent schema: in the example above, the parent schema for {type: 'string'} is the root schema
index/property: in the example above, for {type: 'string'} the property name is 'foo'
const traverse = require('json-schema-traverse');
const schema = {
mySchema: {
minimum: 1,
maximum: 2
}
};
traverse(schema, {allKeys: true, cb});
// cb is called 2 times with:
// 1. root schema
// 2. mySchema
Without the option allKeys: true, the callback will be called only with the root schema.
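The pre/post traversal order described above can be sketched with a stripped-down traverser (this sketch handles only the properties keyword, unlike the real library, which covers all schema keywords):

```javascript
// Stripped-down sketch of pre/post-order schema traversal. Handles only
// the `properties` keyword for illustration.
function miniTraverse(schema, pre, post) {
  pre(schema); // visit before children
  var props = schema.properties || {};
  Object.keys(props).forEach(function (name) {
    miniTraverse(props[name], pre, post);
  });
  post(schema); // visit after all children
}

var schema = {
  properties: {
    foo: { type: 'string' },
    bar: { type: 'integer' }
  }
};

var preOrder = [];
var postOrder = [];
miniTraverse(
  schema,
  function (s) { preOrder.push(s.type || 'root'); },
  function (s) { postOrder.push(s.type || 'root'); }
);
console.log(preOrder);  // [ 'root', 'string', 'integer' ]
console.log(postOrder); // [ 'string', 'integer', 'root' ]
```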
Turn a writeable and readable stream into a single streams2 duplex stream.
Similar to duplexer2 except it supports both streams2 and streams1 as input and it allows you to set the readable and writable part asynchronously using setReadable(stream) and setWritable(stream)
npm install duplexify
Use duplexify(writable, readable, streamOptions) (or duplexify.obj(writable, readable) to create an object stream)
var duplexify = require('duplexify')
// turn writableStream and readableStream into a single duplex stream
var dup = duplexify(writableStream, readableStream)
dup.write('hello world') // will write to writableStream
dup.on('data', function(data) {
// will read from readableStream
})You can also set the readable and writable parts asynchronously
var dup = duplexify()
dup.write('hello world') // write will buffer until the writable
// part has been set
// wait a bit ...
dup.setReadable(readableStream)
// maybe wait some more?
dup.setWritable(writableStream)
If you call setReadable or setWritable multiple times it will unregister the previous readable/writable stream. To disable the readable or writable part call setReadable or setWritable with null.
If the readable or writable streams emits an error or close it will destroy both streams and bubble up the event. You can also explicitly destroy the streams by calling dup.destroy(). The destroy method optionally takes an error object as argument, in which case the error is emitted as part of the error event.
dup.on('error', function(err) {
console.log('readable or writable emitted an error - close will follow')
})
dup.on('close', function() {
console.log('the duplex stream is destroyed')
})
dup.destroy() // calls destroy on the readable and writable part (if present)
Turning a node core http request into a duplex stream is as easy as:
var duplexify = require('duplexify')
var http = require('http')
var request = function(opts) {
var req = http.request(opts)
var dup = duplexify(req)
req.on('response', function(res) {
dup.setReadable(res)
})
return dup
}
var req = request({
method: 'GET',
host: 'www.google.com',
port: 80
})
req.end()
req.pipe(process.stdout)
duplexify is part of the mississippi stream utility collection which includes more useful stream modules similar to this one.
emoji-regex offers a regular expression to match all emoji symbols (including textual representations of emoji) as per the Unicode Standard.
This repository contains a script that generates this regular expression based on the data from Unicode Technical Report #51. Because of this, the regular expression can easily be updated whenever new emoji are added to the Unicode standard.
Via npm:
In Node.js:
const emojiRegex = require('emoji-regex');
// Note: because the regular expression has the global flag set, this module
// exports a function that returns the regex rather than exporting the regular
// expression itself, to make it impossible to (accidentally) mutate the
// original regular expression.
const text = `
\u{231A}: ⌚ default emoji presentation character (Emoji_Presentation)
\u{2194}\u{FE0F}: ↔️ default text presentation character rendered as emoji
\u{1F469}: 👩 emoji modifier base (Emoji_Modifier_Base)
\u{1F469}\u{1F3FF}: 👩🏿 emoji modifier base followed by a modifier
`;
const regex = emojiRegex();
let match;
while (match = regex.exec(text)) {
const emoji = match[0];
console.log(`Matched sequence ${ emoji } — code points: ${ [...emoji].length }`);
}Console output:
Matched sequence ⌚ — code points: 1
Matched sequence ⌚ — code points: 1
Matched sequence ↔️ — code points: 2
Matched sequence ↔️ — code points: 2
Matched sequence 👩 — code points: 1
Matched sequence 👩 — code points: 1
Matched sequence 👩🏿 — code points: 2
Matched sequence 👩🏿 — code points: 2
To match emoji in their textual representation as well (i.e. emoji that are not Emoji_Presentation symbols and that aren’t forced to render as emoji by a variation selector), require the other regex:
Additionally, in environments which support ES2015 Unicode escapes, you may require ES2015-style versions of the regexes:
const emojiRegex = require('emoji-regex/es2015/index.js');
const emojiRegexText = require('emoji-regex/es2015/text.js');
Mathias Bynens
Like duplexer2 but using Streams3 without readable-stream dependency
var stream = require("stream");
var duplexer3 = require("duplexer3");
var writable = new stream.Writable({objectMode: true}),
readable = new stream.Readable({objectMode: true});
writable._write = function _write(input, encoding, done) {
if (readable.push(input)) {
return done();
} else {
readable.once("drain", done);
}
};
readable._read = function _read(n) {
// no-op
};
// simulate the readable thing closing after a bit
writable.once("finish", function() {
setTimeout(function() {
readable.push(null);
}, 500);
});
var duplex = duplexer3(writable, readable);
duplex.on("data", function(e) {
console.log("got data", JSON.stringify(e));
});
duplex.on("finish", function() {
console.log("got finish event");
});
duplex.on("end", function() {
console.log("got end event");
});
duplex.write("oh, hi there", function() {
console.log("finished writing");
});
duplex.end(function() {
console.log("finished ending");
});
got data "oh, hi there"
finished writing
got finish event
finished ending
got end event
This is a reimplementation of duplexer using the Streams3 API which is standard in Node as of v4. Everything largely works the same.
npm i duplexer3
Creates a new DuplexWrapper object, which is the actual class that implements most of the fun stuff. All that fun stuff is hidden. DON’T LOOK.
Arguments
options: stream.Duplex options, as well as the properties described below.
Options
bubbleErrors: (default: true) whether to bubble up errors emitted by the wrapped streams.
3-clause BSD. A copy is included with the source.
Manipulate the HTTP Vary header
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Adds the given header field to the Vary response header of res. This can be a string of a single field, a string of a valid Vary header, or an array of multiple fields.
This will append the header if not already listed, otherwise leaves it listed in the current location.
Adds the given header field to the Vary response header string header. This can be a string of a single field, a string of a valid Vary header, or an array of multiple fields.
This will append the header if not already listed, otherwise leaves it listed in the current location. The new header string is returned.
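The append-if-not-already-listed behaviour described above can be sketched like this (a simplified illustration; the real module additionally validates header field names):

```javascript
// Simplified sketch of the append semantics described in the text: add
// each new field to the Vary header string unless an equivalent field
// (case-insensitive) is already listed, and return the new header string.
function append(header, field) {
  var fields = Array.isArray(field) ? field : field.split(/ *, */);
  var current = header === '' ? [] : header.split(/ *, */);
  fields.forEach(function (f) {
    var listed = current.some(function (h) {
      return h.toLowerCase() === f.toLowerCase();
    });
    if (!listed) current.push(f);
  });
  return current.join(', ');
}

console.log(append('Accept, User-Agent', 'Origin')); // 'Accept, User-Agent, Origin'
console.log(append('Accept', 'accept'));             // 'Accept' (already listed)
```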
// Get header string appending "Origin" to "Accept, User-Agent"
vary.append('Accept, User-Agent', 'Origin')
var http = require('http')
var vary = require('vary')
http.createServer(function onRequest (req, res) {
// about to user-agent sniff
vary(res, 'User-Agent')
var ua = req.headers['user-agent'] || ''
var isMobile = /mobi|android|touch|mini/i.test(ua)
// serve site, depending on isMobile
res.setHeader('Content-Type', 'text/html')
res.end('You are (probably) ' + (isMobile ? '' : 'not ') + 'a mobile user')
})
An ini format parser and serializer for node.
Sections are treated as nested objects. Items before the first heading are saved on the object directly.
Consider an ini-file config.ini that looks like this:
; this comment is being ignored
scope = global
[database]
user = dbuser
password = dbpassword
database = use_this_database
[paths.default]
datadir = /var/lib/data
array[] = first value
array[] = second value
array[] = third value
You can read, manipulate and write the ini-file like so:
var fs = require('fs')
, ini = require('ini')
var config = ini.parse(fs.readFileSync('./config.ini', 'utf-8'))
config.scope = 'local'
config.database.database = 'use_another_database'
config.paths.default.tmpdir = '/tmp'
delete config.paths.default.datadir
config.paths.default.array.push('fourth value')
fs.writeFileSync('./config_modified.ini', ini.stringify(config, { section: 'section' }))
This will result in a file called config_modified.ini being written to the filesystem with the following content:
[section]
scope=local
[section.database]
user=dbuser
password=dbpassword
database=use_another_database
[section.paths.default]
tmpdir=/tmp
array[]=first value
array[]=second value
array[]=third value
array[]=fourth value
Decode the ini-style formatted inistring into a nested object.
Alias for decode(inistring)
Encode the object object into an ini-style formatted string. If the optional parameter section is given, then all top-level properties of the object are put into this section and the section-string is prepended to all sub-sections, see the usage example above.
The options object may contain the following:
section: A string which will be the first section in the encoded ini data. Defaults to none.
whitespace: Boolean to specify whether to put whitespace around the = character. By default, whitespace is omitted, to be friendly to some persnickety old parsers that don’t tolerate it well. But some find that it’s more human-readable and pretty with the whitespace.
For backwards compatibility reasons, if a string options is passed in, then it is assumed to be the section value.
Alias for encode(object, [options])
Escapes the string val such that it is safe to be used as a key or value in an ini-file. Basically escapes quotes. For example
ini.safe('"unsafe string"')
would result in
"\"unsafe string\""
Unescapes the string val
npm install --save @types/node
This package contains type definitions for Node.js (http://nodejs.org/).
Files were exported from https://github.com/DefinitelyTyped/DefinitelyTyped/tree/master/types/node.
Buffer, __dirname, __filename, clearImmediate, clearInterval, clearTimeout, console, exports, global, module, process, queueMicrotask, require, setImmediate, setInterval, setTimeout
These definitions were written by Microsoft TypeScript, DefinitelyTyped, Alberto Schiabel, Alexander T., Alvis HT Tang, Andrew Makarov, Benjamin Toueg, Bruno Scheufler, Chigozirim C., David Junger, Deividas Bakanas, Eugene Y. Q. Shen, Flarna, Hannes Magnusson, Hoàng Văn Khải, Huw, Kelvin Jin, Klaus Meinhardt, Lishude, Mariusz Wiktorczyk, Mohsen Azimi, Nicolas Even, Nikita Galkin, Parambir Singh, Sebastian Silbermann, Simon Schick, Thomas den Hollander, Wilco Bakker, wwwy3y3, Samuel Ainsworth, Kyle Uehlein, Jordi Oliveras Rovira, Thanik Bhongbhibhat, Marcin Kopacz, Trivikram Kamat, Minh Son Nguyen, Junxiao Shi, Ilia Baryshnikov, ExE Boss, Surasak Chaisurin, Piotr Błażejewicz, Anna Henningsen, Jason Kwok, and Victor Perin.
Define multiple non-enumerable properties at once. Uses Object.defineProperty when available; falls back to standard assignment in older engines. Existing properties are not overridden. Accepts a map of property names to a predicate that, when true, force-overrides.
var define = require('define-properties');
var assert = require('assert');
var obj = define({ a: 1, b: 2 }, {
a: 10,
b: 20,
c: 30
});
assert(obj.a === 1);
assert(obj.b === 2);
assert(obj.c === 30);
if (define.supportsDescriptors) {
assert.deepEqual(Object.keys(obj), ['a', 'b']);
assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'c'), {
configurable: true,
enumerable: false,
value: 30,
writable: false
});
}
Then, with predicates:
var define = require('define-properties');
var assert = require('assert');
var obj = define({ a: 1, b: 2, c: 3 }, {
a: 10,
b: 20,
c: 30
}, {
a: function () { return false; },
b: function () { return true; }
});
assert(obj.a === 1);
assert(obj.b === 20);
assert(obj.c === 3);
if (define.supportsDescriptors) {
assert.deepEqual(Object.keys(obj), ['a', 'c']);
assert.deepEqual(Object.getOwnPropertyDescriptor(obj, 'b'), {
configurable: true,
enumerable: false,
value: 20,
writable: false
});
}
Simply clone the repo, npm install, and run npm test
Assign the enumerable es6 Symbol properties from an object (or objects) to the first object passed on the arguments. Can be used as a supplement to other extend, assign or merge methods as a polyfill for the Symbols part of the es6 Object.assign method.
From the Mozilla Developer docs for Symbol:
A symbol is a unique and immutable data type and may be used as an identifier for object properties. The symbol object is an implicit object wrapper for the symbol primitive data type.
Install with npm
var assignSymbols = require('assign-symbols');
var obj = {};
var one = {};
var symbolOne = Symbol('aaa');
one[symbolOne] = 'bbb';
var two = {};
var symbolTwo = Symbol('ccc');
two[symbolTwo] = 'ddd';
assignSymbols(obj, one, two);
console.log(obj[symbolOne]);
//=> 'bbb'
console.log(obj[symbolTwo]);
//=> 'ddd'
Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Jon Schlinkert
This file was generated by verb-cli on November 06, 2015.
# flat-cache
> A stupidly simple key/value storage using files to persist the data
var flatCache = require('flat-cache')
// loads the cache, if one does not exists for the given
// Id a new one will be prepared to be created
var cache = flatCache.load('cacheId');
// sets a key on the cache
cache.setKey('key', { foo: 'var' });
// get a key from the cache
cache.getKey('key') // { foo: 'var' }
// fetch the entire persisted object
cache.all() // { 'key': { foo: 'var' } }
// remove a key
cache.removeKey('key'); // removes a key from the cache
// save it to disk
cache.save(); // very important, if you don't save no changes will be persisted.
// cache.save( true /* noPrune */) // can be used to prevent the removal of non visited keys
// loads the cache from a given directory; if one does
// not exist for the given Id, a new one will be prepared to be created
var cache = flatCache.load('cacheId', path.resolve('./path/to/folder'));
// The following methods are useful to clear the cache
// delete a given cache
flatCache.clearCacheById('cacheId') // removes the cacheId document if one exists.
// delete all cache
flatCache.clearAll(); // removes the cache directory
I needed a super simple and dumb in-memory cache with optional disk persistence in order to make a script that will beautify files with esformatter only execute on the files that were changed since the last run. To make that possible we need to store the fileSize and modificationTime of the files. So a simple key/value storage was needed and Bam! this module was born.
- If no directory is specified when the load method is called, a folder named .cache will be created inside the module directory when cache.save is called. If you’re committing your node_modules to any vcs, you might want to ignore the default .cache folder, or specify a custom directory.
- The values set on the keys of the cache should be stringify-able ones, meaning no circular references.
- The module could use Object.observe to deliver the changes to disk, but I wanted to keep this module intentionally dumb and simple.
- Non-visited keys are removed when cache.save() is called. If this is not desired, you can pass true to the save call like: cache.save( true /* noPrune */ ).
Constants and utilities about visitor keys to traverse AST.
Use npm to install.
type:
{ [type: string]: string[] | undefined }
Visitor keys. These keys are frozen.
This is an object. Its keys are ESTree node types; its values are arrays of the property names which have child nodes.
For example:
console.log(evk.KEYS.AssignmentExpression) // → ["left", "right"]
type:
(node: object) => string[]
Get the visitor keys of a given AST node.
This is similar to Object.keys(node) of ES Standard, but some keys are excluded: parent, leadingComments, trailingComments, and names which start with _.
This will be used to traverse unknown nodes.
For example:
const node = {
type: "AssignmentExpression",
left: { type: "Identifier", name: "foo" },
right: { type: "Literal", value: 0 }
}
console.log(evk.getKeys(node)) // → ["type", "left", "right"]
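The exclusion rule above can be sketched as a small filter. This is an illustrative re-implementation of the documented behavior, not the eslint-visitor-keys source:

```javascript
// Illustrative re-implementation of the documented getKeys behavior
// (not the eslint-visitor-keys source).
const IGNORED_KEYS = new Set(["parent", "leadingComments", "trailingComments"]);

function getKeys(node) {
  // Like Object.keys(node), minus the ignored names and _-prefixed keys.
  return Object.keys(node).filter(
    (key) => !IGNORED_KEYS.has(key) && !key.startsWith("_")
  );
}

const node = {
  type: "AssignmentExpression",
  left: { type: "Identifier", name: "foo" },
  right: { type: "Literal", value: 0 },
  _cached: true,
  parent: null,
};
console.log(getKeys(node)); // → [ 'type', 'left', 'right' ]
```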
type:
(additionalKeys: object) => { [type: string]: string[] | undefined }
Make the union set with evk.KEYS and the given keys.
additionalKeys comes first, then evk.KEYS is concatenated after it.
For example:
console.log(evk.unionWith({
MethodDefinition: ["decorators"]
})) // → { ..., MethodDefinition: ["decorators", "key", "value"], ... }
See GitHub releases.
Welcome. See ESLint contribution guidelines.
npm test runs tests and measures code coverage.
npm run lint checks source code with ESLint.
npm run coverage opens the code coverage report of the previous test with your default browser.
npm run release publishes this package to the npm registry.
Create and parse HTTP Content-Type header according to RFC 7231
Parse a content type string. This will return an object with the following properties (examples are shown for the string 'image/svg+xml; charset=utf-8'):
type: The media type (the type and subtype, always lower case). Example: 'image/svg+xml'
parameters: An object of the parameters in the media type (name of parameter always lower case). Example: {charset: 'utf-8'}
Throws a TypeError if the string is missing or invalid.
Parse the content-type header from the given req. Short-cut for contentType.parse(req.headers['content-type']).
Throws a TypeError if the Content-Type header is missing or invalid.
Parse the content-type header set on the given res. Short-cut for contentType.parse(res.getHeader('content-type')).
Throws a TypeError if the Content-Type header is missing or invalid.
Format an object into a content type string. This will return a string of the content type for the given object with the following properties (examples are shown that produce the string 'image/svg+xml; charset=utf-8'):
type: The media type (will be lower-cased). Example: 'image/svg+xml'
parameters: An object of the parameters in the media type (name of the parameter will be lower-cased). Example: {charset: 'utf-8'}
Throws a TypeError if the object contains an invalid type or parameter names.
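A simplified sketch of the parse/format round-trip described above (the real content-type module validates tokens per RFC 7231 and quotes parameter values where needed; this toy version does not):

```javascript
// Toy Content-Type parser/formatter, illustrating the shape of the API
// described above; not the content-type module's implementation.
function parse(str) {
  const [type, ...rest] = str.split(";");
  const parameters = {};
  for (const part of rest) {
    const i = part.indexOf("=");
    if (i !== -1) {
      // parameter names are case-insensitive, so lower-case them
      parameters[part.slice(0, i).trim().toLowerCase()] = part.slice(i + 1).trim();
    }
  }
  return { type: type.trim().toLowerCase(), parameters };
}

function format(obj) {
  let str = obj.type;
  for (const [name, value] of Object.entries(obj.parameters || {})) {
    str += `; ${name.toLowerCase()}=${value}`;
  }
  return str;
}

const parsed = parse("image/svg+xml; charset=utf-8");
console.log(parsed.type);       // 'image/svg+xml'
console.log(parsed.parameters); // { charset: 'utf-8' }
console.log(format(parsed));    // 'image/svg+xml; charset=utf-8'
```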
Returns true if any values exist, false if empty. Works for booleans, functions, numbers, strings, nulls, objects and arrays.
Install with npm:
var hasValue = require('has-values');
hasValue('a');
//=> true
hasValue('');
//=> false
hasValue(1);
//=> true
hasValue(0);
//=> false
hasValue(0, true); // treat zero as a value
//=> true
hasValue({a: 'a'});
//=> true
hasValue({});
//=> false
hasValue(['a']);
//=> true
hasValue([]);
//=> false
hasValue(function(foo) {}); // function length/arity
//=> true
hasValue(function() {});
//=> false
hasValue(true);
//=> true
hasValue(false);
//=> true
To test for empty values, do:
You might also be interested in these projects:
is-plain-object: Returns true if an object was created by the Object constructor. | homepage
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Generate readme and API documentation with verb:
Or, if verb is installed globally:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb on March 27, 2016.
# color-convert
Color-convert is a color conversion library for JavaScript and node. It converts all ways between rgb, hsl, hsv, hwb, cmyk, ansi, ansi16, hex strings, and CSS keywords (will round to closest):
var convert = require('color-convert');
convert.rgb.hsl(140, 200, 100); // [96, 48, 59]
convert.keyword.rgb('blue'); // [0, 0, 255]
var rgbChannels = convert.rgb.channels; // 3
var cmykChannels = convert.cmyk.channels; // 4
var ansiChannels = convert.ansi16.channels; // 1
npm install color-convert
Simply get the property of the from and to conversion that you’re looking for.
All functions have a rounded and unrounded variant. By default, return values are rounded. To get the unrounded (raw) results, simply tack on .raw to the function.
All ‘from’ functions have a hidden property called .channels that indicates the number of channels the function expects (not including alpha).
var convert = require('color-convert');
// Hex to LAB
convert.hex.lab('DEADBF'); // [ 76, 21, -2 ]
convert.hex.lab.raw('DEADBF'); // [ 75.56213190997677, 20.653827952644754, -2.290532499330533 ]
// RGB to CMYK
convert.rgb.cmyk(167, 255, 4); // [ 35, 0, 98, 0 ]
convert.rgb.cmyk.raw(167, 255, 4); // [ 34.509803921568626, 0, 98.43137254901961, 0 ]
All functions that accept multiple arguments also support passing an array.
Note that this does not apply to functions that convert from a color that only requires one value (e.g. keyword, ansi256, hex, etc.)
var convert = require('color-convert');
convert.rgb.hex(123, 45, 67); // '7B2D43'
convert.rgb.hex([123, 45, 67]); // '7B2D43'
Conversions that don’t have an explicitly defined conversion (in conversions.js), but can be converted by means of sub-conversions (e.g. XYZ -> RGB -> CMYK), are automatically routed together. This allows just about any color model supported by color-convert to be converted to any other model, so long as a sub-conversion path exists. This is also true for conversions requiring more than one step in between (e.g. LCH -> LAB -> XYZ -> RGB -> Hex).
Keep in mind that extensive conversions may result in a loss of precision, and exist only to be complete. For a list of “direct” (single-step) conversions, see conversions.js.
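As an illustration of what a single “direct” conversion in conversions.js amounts to, here is a hedged sketch of an RGB-to-hex step (illustrative only, not the color-convert source):

```javascript
// Minimal sketch of one direct conversion: RGB channels to a hex string.
function rgbToHex(r, g, b) {
  // pack the three 8-bit channels into one 24-bit integer
  const int = ((r & 0xff) << 16) | ((g & 0xff) << 8) | (b & 0xff);
  return int.toString(16).toUpperCase().padStart(6, "0");
}

console.log(rgbToHex(123, 45, 67)); // '7B2D43'
```

Routing then just composes such direct steps until a path from the source model to the target model is found.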
If there is a new model you would like to support, or want to add a direct conversion between two existing models, please send us a pull request.
Create an array by repeating the given value n times.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
const repeat = require('repeat-element');
repeat('a', 5);
//=> ['a', 'a', 'a', 'a', 'a']
repeat('a', 1);
//=> ['a']
repeat('a', 0);
//=> []
repeat(null, 5)
//» [ null, null, null, null, null ]
repeat({some: 'object'}, 5)
//» [ { some: 'object' },
// { some: 'object' },
// { some: 'object' },
// { some: 'object' },
// { some: 'object' } ]
repeat(5, 5)
//» [ 5, 5, 5, 5, 5 ]
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
| Commits | Contributor |
|---|---|
| 17 | jonschlinkert |
| 3 | LinusU |
| 1 | architectcodes |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on August 19, 2018.
# ansi-align
align-text with ANSI support for CLIs
Easily center- or right- align a block of text, carefully ignoring ANSI escape codes.
ansiAlign(text, [opts])
Align the given text per the line with the greatest string-width, returning a new string (or array).
text: required, string or array
The text to align. If a string is given, it will be split using either the opts.split value or '\n' by default. If an array is given, a different array of modified strings will be returned.
opts: optional, object
Options to change behavior, see below.
opts.align: string, default 'center'
The alignment mode. Use 'center' for center-alignment, 'right' for right-alignment, or 'left' for left-alignment. Note that the given text is assumed to be left-aligned already, so specifying align: 'left' just returns the text as is (no-op).
opts.split: string or RegExp, default '\n'
The separator to use when splitting the text. Only used if text is given as a string.
opts.pad: string, default ' '
The value used to left-pad (prepend to) lines of lesser width. Will be repeated as necessary to adjust alignment to the line with the greatest width.
ansiAlign.center(text)
Alias for ansiAlign(text, { align: 'center' }).
ansiAlign.right(text)
Alias for ansiAlign(text, { align: 'right' }).
ansiAlign.left(text)
Alias for ansiAlign(text, { align: 'left' }), which is a no-op.
center-align: Very close to this package, except it doesn’t support ANSI codes.
left-pad: Great for left-padding but does not support center alignment or ANSI codes.
An ES7/ES2016 spec-compliant Array.prototype.includes shim/polyfill/replacement that works as far down as ES3.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the proposed spec.
Because Array.prototype.includes depends on a receiver (the this value), the main export takes the array to operate on as the first argument.
Basic usage: includes(array, value[, fromIndex=0])
var includes = require('array-includes');
var assert = require('assert');
var arr = [ 'one', 'two' ];
includes(arr, 'one'); // true
includes(arr, 'three'); // false
includes(arr, 'one', 1); // false
var arr = [
1,
'foo',
NaN,
-0
];
assert.equal(arr.indexOf(0) > -1, true);
assert.equal(arr.indexOf(-0) > -1, true);
assert.equal(includes(arr, 0), true);
assert.equal(includes(arr, -0), true);
assert.equal(arr.indexOf(NaN) > -1, false);
assert.equal(includes(arr, NaN), true);
assert.equal(includes(arr, 'foo', 0), true);
assert.equal(includes(arr, 'foo', 1), true);
assert.equal(includes(arr, 'foo', 2), false);
/* when Array#includes is not present */
delete Array.prototype.includes;
var shimmedIncludes = includes.shim();
assert.equal(shimmedIncludes, includes.getPolyfill());
assert.equal(arr.includes('foo', 1), includes(arr, 'foo', 1));
/* when Array#includes is present */
var shimmedIncludes = includes.shim();
assert.equal(shimmedIncludes, Array.prototype.includes);
assert.equal(arr.includes(1, 'foo'), includes(arr, 1, 'foo'));
Simply clone the repo, npm install, and run npm test.
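The NaN and -0 results above come from the SameValueZero comparison the spec uses (unlike indexOf’s strict equality); it can be sketched as:

```javascript
// SameValueZero: like ===, except NaN matches NaN (and +0 equals -0).
function sameValueZero(x, y) {
  return x === y || (Number.isNaN(x) && Number.isNaN(y));
}

console.log(sameValueZero(NaN, NaN)); // true (indexOf would miss this)
console.log(sameValueZero(0, -0));    // true
console.log(sameValueZero(1, "1"));   // false
```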
easily create complex multi-column command-line-interfaces.
const ui = require('cliui')()
ui.div('Usage: $0 [command] [options]')
ui.div({
text: 'Options:',
padding: [2, 0, 1, 0]
})
ui.div(
{
text: "-f, --file",
width: 20,
padding: [0, 4, 0, 4]
},
{
text: "the file to load." +
chalk.green("(if this description is long it wraps).")
,
width: 20
},
{
text: chalk.red("[required]"),
align: 'right'
}
)
console.log(ui.toString())
As of v7 cliui supports Deno and ESM:
import cliui from "https://deno.land/x/cliui/deno.ts";
const ui = cliui({})
ui.div('Usage: $0 [command] [options]')
ui.div({
text: 'Options:',
padding: [2, 0, 1, 0]
})
ui.div({
text: "-f, --file",
width: 20,
padding: [0, 4, 0, 4]
})
console.log(ui.toString())
cliui exposes a simple layout DSL:
If you create a single ui.div, passing a string rather than an object:
- \n: characters will be interpreted as new rows.
- \t: characters will be interpreted as new columns.
- \s: characters will be interpreted as padding.
as an example…
var ui = require('./')({
width: 60
})
ui.div(
'Usage: node ./bin/foo.js\n' +
' <regex>\t provide a regex\n' +
' <glob>\t provide a glob\t [required]'
)
console.log(ui.toString())
will output:
Usage: node ./bin/foo.js
<regex> provide a regex
<glob> provide a glob [required]
Specify the maximum width of the UI being generated. If no width is provided, cliui will try to get the current window’s width and use it, and if that doesn’t work, width will be set to 80.
Enable or disable the wrapping of text in a column.
Create a row with any number of columns, a column can either be a string, or an object with the following options:
- align: right or center.
- padding: [top, right, bottom, left].
Similar to div, except the next row will be appended without a new line being created.
Resets the UI elements of the current cliui instance, maintaining the values set for width and wrap.
Node-core v8.11.1 streams for userland
Node-core streams for userland
This package is a mirror of the Streams2 and Streams3 implementations in Node-core.
Full documentation may be found on the Node.js website.
If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries, are using, use readable-stream only and avoid the “stream” module in Node-core; for background see this blogpost.
As of version 2.0.0 readable-stream uses semantic versioning.
readable-stream is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:
readable-stream to be included in Node.js.
Returns true if the value is an object and not an array or null.
Install with npm:
Use is-plain-object if you want only objects that are created by the Object constructor.
Install with npm:
Install with bower
True
All of the following return true:
isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject({});
isObject(new Foo);
isObject(/foo/);
False
All of the following return false:
isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);
You might also be interested in these projects:
merge-deep: Recursively merge values in a javascript object. | homepage
is-plain-object: Returns true if an object was created by the Object constructor. | homepage
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Generate readme and API documentation with verb:
Or, if verb is installed globally:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb, v0.9.0, on April 25, 2016.
node-extend is a port of the classic extend() method from jQuery. It behaves as you expect. It is simple, tried and true.
Notes:
- Object.assign now offers the same functionality natively (but without the “deep copy” option). See ECMAScript 2015 (ES6) in Node.js.
- Object.assign in both Node.js and many browsers (since NPM modules are for the browser too) may not be fully spec-compliant. Check the object.assign module for a compliant candidate.
This package is available on npm as: extend
Syntax: extend ( [deep], target, object1, [objectN] )
Extend one object with one or more others, returning the modified object.
Example:
Keep in mind that the target object will be modified, and will be returned from extend().
If a boolean true is specified as the first argument, extend performs a deep copy, recursively copying any objects it finds. Otherwise, the copy will share structure with the original object(s). Undefined properties are not copied. However, properties inherited from the object’s prototype will be copied over. Warning: passing false as the first argument is not supported.
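The shallow/deep behavior just described can be sketched as follows (an illustrative re-implementation, not the node-extend source; the real module also guards against prototype pollution and other edge cases):

```javascript
// Illustrative extend() sketch: shallow merge by default, deep copy when
// the first argument is true. Not the node-extend source.
function extend(...args) {
  let deep = false;
  if (typeof args[0] === "boolean") deep = args.shift();
  const target = args.shift();
  for (const source of args) {
    for (const key in source) {
      const value = source[key];
      if (value === undefined) continue; // undefined properties are not copied
      if (deep && value !== null && typeof value === "object" && !Array.isArray(value)) {
        const existing = target[key];
        // recurse into plain-object values when deep copying
        target[key] = extend(true, existing && typeof existing === "object" ? existing : {}, value);
      } else {
        target[key] = value;
      }
    }
  }
  return target;
}

console.log(extend(true, { a: { b: 1 } }, { a: { c: 2 } })); // { a: { b: 1, c: 2 } }
```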
- deep Boolean (optional) If set, the merge becomes recursive (i.e. deep copy).
- target Object The object to extend.
- object1 Object The object that will be merged into the first.
- objectN Object (Optional) More objects to merge into the first.
All credit to the jQuery authors for perfecting this amazing utility.
Ported to Node.js by Stefan Thomas with contributions by Jonathan Buchanan and Jordan Harband.
If you’d like to use native version when it exists and fallback to polyfill if it doesn’t, but without implementing Map on global scope, do:
If the global es6 Map exists or Multimap.Map is set, Multimap will use the Map as inner store, that means Object can be used as key.
var Multimap = require('multimap');
// if harmony is on
/* nothing need to do */
// or if you are using es6-shim
Multimap.Map = ShimMap;
var m = new Multimap();
var key = {};
m.set(key, 'one');
Otherwise, a plain object will be used as the inner store, and all keys will be transformed into strings.
Just download the index.js as Multimap.js.
<script src="Multimap.js"></script>
<script>
var map = new Multimap([['a', 1], ['b', 2], ['c', 3]]);
map = map.set('b', 20);
map.get('b'); // [2, 20]
</script>
Or use as an AMD loader:
require(['./Multimap.js'], function (Multimap) {
var map = new Multimap([['a', 1], ['b', 2], ['c', 3]]);
map = map.set('b', 20);
map.get('b'); // [2, 20]
});
Object.defineProperty and Array.prototype.forEach.
The following shows how to use Multimap:
var Multimap = require('multimap');
var map = new Multimap([['a', 'one'], ['b', 1], ['a', 'two'], ['b', 2]]);
map.size; // 4
map.count; // 2
map.get('a'); // ['one', 'two']
map.get('b'); // [1, 2]
map.has('a'); // true
map.has('foo'); // false
map.has('a', 'one'); // true
map.has('b', 3); // false
map.set('a', 'three');
map.size; // 5
map.count; // 2
map.get('a'); // ['one', 'two', 'three']
map.set('b', 3, 4);
map.size; // 7
map.count; // 2
map.delete('a', 'three'); // true
map.delete('x'); // false
map.delete('a', 'four'); // false
map.delete('b'); // true
map.size; // 2
map.count; // 1
map.set('b', 1, 2);
map.size; // 4
map.count; // 2
map.forEach(function (value, key) {
// iterates { 'one', 'a' }, { 'two', 'a' }, { 1, 'b' }, { 2, 'b' }
});
map.forEachEntry(function (entry, key) {
// iterates {['one', 'two'], 'a' }, {[1, 2], 'b' }
});
var keys = map.keys(); // iterator with ['a', 'b']
keys.next().value; // 'a'
var values = map.values(); // iterator ['one', 'two', 1, 2]
map.clear(); // undefined
map.size; // 0
map.count; // 0
Get the status of a file with some features.
Wrapper around standard method fs.lstat and fs.stat with some features.
npm install @nodelib/fs.stat
Returns an instance of the fs.Stats class for the provided path, using the standard callback style.
fsStat.stat('path', (error, stats) => { /* … */ });
fsStat.stat('path', {}, (error, stats) => { /* … */ });
fsStat.stat('path', new fsStat.Settings(), (error, stats) => { /* … */ });
Returns an instance of the fs.Stats class for the provided path.
const stats = fsStat.stat('path');
const stats = fsStat.stat('path', {});
const stats = fsStat.stat('path', new fsStat.Settings());
path (required, string | Buffer | URL): A path to a file. If a URL is provided, it must use the file: protocol.
optionsOrSettings (optional, Options | Settings, default: an instance of the Settings class): An Options object or an instance of the Settings class.
:book: When you pass a plain object, an instance of the Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.
A class of full settings of the package.
const settings = new fsStat.Settings({ followSymbolicLink: false });
const stats = fsStat.stat('path', settings);followSymbolicLinkbooleantrueFollow symbolic link or not. Call fs.stat on symbolic link if true.
markSymbolicLinkbooleanfalseMark symbolic link by setting the return value of isSymbolicLink function to always true (even after fs.stat).
:book: Can be used if you want to know what is hidden behind a symbolic link, but still continue to know that it is a symbolic link.
throwErrorOnBrokenSymbolicLinkbooleantrueThrow an error when symbolic link is broken if true or safely return lstat call if false.
fsFileSystemAdapterBy default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.
interface FileSystemAdapter {
lstat?: typeof fs.lstat;
stat?: typeof fs.stat;
lstatSync?: typeof fs.lstatSync;
statSync?: typeof fs.statSync;
}
const settings = new fsStat.Settings({
fs: { lstat: fakeLstat }
});
See the Releases section of our GitHub project for the changelog of each release version.
This is an extension for node’s fs.writeFile that makes its operation atomic and allows you to set ownership (uid/gid) of the file.
Atomically and asynchronously writes data to a file, replacing the file if it already exists. data can be a string or a buffer.
The file is initially named filename + "." + murmurhex(__filename, process.pid, ++invocations). Note that require('worker_threads').threadId is used in addition to process.pid if running inside of a worker thread. If writeFile completes successfully then, if passed the chown option it will change the ownership of the file. Finally it renames the file back to the filename you specified. If it encounters errors at any of these steps it will attempt to unlink the temporary file and then pass the error back to the caller. If multiple writes are concurrently issued to the same file, the write operations are put into a queue and serialized in the order they were called, using Promises. Writes to different files are still executed in parallel.
If provided, the chown option requires both uid and gid properties, or else you’ll get an error. If chown is not specified it will default to using the owner of the previous file. To prevent chown from being run you can also pass false, in which case the file will be created with the current user’s credentials.
If mode is not specified, it will default to using the permissions from an existing file, if any. Explicitly setting this to false removes this default, resulting in a file created with the system default permissions.
If options is a String, it’s assumed to be the encoding option. The encoding option is ignored if data is a buffer. It defaults to ‘utf8’.
If the fsync option is false, writeFile will skip the final fsync call.
If the tmpfileCreated option is specified it will be called with the name of the tmpfile when created.
Example:
writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}}, function (err) {
if (err) throw err;
console.log('It\'s saved!');
});This function also supports async/await:
(async () => {
try {
await writeFileAtomic('message.txt', 'Hello Node', {chown:{uid:100,gid:50}});
console.log('It\'s saved!');
} catch (err) {
console.error(err);
process.exit(1);
}
})();
The synchronous version of writeFileAtomic.
npm install run-parallel
Run the tasks array of functions in parallel, without waiting until the previous function has completed. If any of the functions pass an error to its callback, the main callback is immediately called with the value of the error. Once the tasks have completed, the results are passed to the final callback as an array.
It is also possible to use an object instead of an array. Each property will be run as a function and the results will be passed to the final callback as an object instead of an array. This can be a more readable way of handling the results.
- tasks - An array or object containing functions to run. Each function is passed a callback(err, result) which it must call on completion with an error err (which can be null) and an optional result value.
- callback(err, results) - An optional callback to run once all the functions have completed. This function gets a results array (or object) containing all the result arguments passed to the task callbacks.
var parallel = require('run-parallel')
parallel([
function (callback) {
setTimeout(function () {
callback(null, 'one')
}, 200)
},
function (callback) {
setTimeout(function () {
callback(null, 'two')
}, 100)
}
],
// optional callback
function (err, results) {
// the results array will equal ['one','two'] even though
// the second function had a shorter timeout.
})
This module is basically equivalent to async.parallel, but it’s handy to just have the one function you need instead of the kitchen sink. Modularity! Especially handy if you’re serving to the browser and need to reduce your javascript bundle size.
Works great in the browser with browserify!
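The behavior described above fits in a few lines; here is an illustrative re-implementation (not the run-parallel source):

```javascript
// Run tasks concurrently; fail fast on the first error; collect results
// into an array (or object, mirroring the tasks container).
function parallel(tasks, callback) {
  const keys = Array.isArray(tasks) ? null : Object.keys(tasks);
  const list = keys ? keys.map((k) => tasks[k]) : tasks;
  const results = keys ? {} : [];
  let pending = list.length;
  let done = false;

  if (pending === 0) return callback && callback(null, results);

  list.forEach((task, i) => {
    task((err, result) => {
      if (done) return; // an earlier task already errored
      if (err) {
        done = true;
        return callback && callback(err);
      }
      results[keys ? keys[i] : i] = result; // results keep task order
      if (--pending === 0) {
        done = true;
        callback && callback(null, results);
      }
    });
  });
}
```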
A super light (0.5K) and fast circular JSON parser, directly from the creator of CircularJSON.
Now available also for PHP.
Usable via CDN or as regular module.
// ESM
import {parse, stringify} from 'flatted';
// CJS
const {parse, stringify} = require('flatted');
const a = [{}];
a[0].a = a;
a.push(a);
stringify(a); // [["1","0"],{"a":"0"}]

As it is for every other specialized format capable of serializing and deserializing circular data, you should never JSON.parse(Flatted.stringify(data)), and you should never Flatted.parse(JSON.stringify(data)).
The only way this can work is Flatted.parse(Flatted.stringify(data)), as with CircularJSON or any other such format; otherwise data integrity is not guaranteed.
Also please note this project serializes and deserializes only data compatible with JSON: sockets, or anything else with internal classes different from those allowed by the JSON standard, won't be serialized and deserialized as expected.
- .parse(string, reviver) lets you revive your own objects.
- .stringify(object, replacer, space) accepts a space parameter for feature parity with the JSON signature.

All ECMAScript engines compatible with Map, Set, Object.keys, and Array.prototype.reduce will work, even if polyfilled.
While stringifying, all Objects, including Arrays, and strings, are flattened out and replaced with a unique index. *
Once parsed, all indexes will be replaced through the flattened collection.
* represented as string to avoid conflicts with numbers
// logic example
var a = [{one: 1}, {two: '2'}];
a[0].a = a;
// a is the main object, will be at index '0'
// {one: 1} is the second object, index '1'
// {two: '2'} the third, in '2', and it has a string
// which will be found at index '3'
Flatted.stringify(a);
// [["1","2"],{"one":1,"a":"0"},{"two":"3"},"2"]
// a[one,two] {one: 1, a} {two: '2'} '2'

The bare-bones internationalization library used by yargs.
Inspired by i18n.
simple string translation:
output:
my awesome string foo
using tagged template literals
output:
my awesome string foo
pluralization support:
output:
2 fishes foo
As of v5 y18n supports Deno:
import y18n from "https://deno.land/x/y18n/deno.ts";
const __ = y18n({
locale: 'pirate',
directory: './test/locales'
}).__
console.info(__`Hi, ${'Ben'} ${'Coe'}!`)

You will need to run with --allow-read to load alternative locales.
The JSON language files should be stored in a ./locales folder. File names correspond to locales, e.g., en.json, pirate.json.
When strings are observed for the first time they will be added to the JSON file corresponding to the current locale.
Create an instance of y18n with the config provided, options include:
- directory: the locale directory, default ./locales.
- updateFiles: should newly observed strings be updated in file, default true.
- locale: what locale should be used.
- fallbackToLanguage: should fallback to a language-only file (e.g. en.json) be allowed if a file matching the locale does not exist (e.g. en_US.json), default true.

Print a localized string, %s will be replaced with args.
This function can also be used as a tag for a template literal. You can use it like this: __`hello ${'world'}`. This will be equivalent to __('hello %s', 'world').
Print a localized string with appropriate pluralization. If %d is provided in the string, the count will replace this placeholder.
Set the current locale being used.
What locale is currently being used?
Update the current locale with the key value pairs in obj.
Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.
ISC
An implementation of WHATWG AbortController interface.
import AbortController from "abort-controller"
const controller = new AbortController()
const signal = controller.signal
signal.addEventListener("abort", () => {
console.log("aborted!")
})
controller.abort()

https://jsfiddle.net/1r2994qp/1/
Use npm to install then use a bundler.
npm install abort-controller
Or download from dist directory.
import AbortController from "abort-controller"
// or
const AbortController = require("abort-controller")
// or UMD version defines a global variable:
const AbortController = window.AbortControllerShim

If your bundler recognizes the browser field of package.json, the imported AbortController is the native one and doesn't contain the shim (even if the native implementation is missing). If you want to polyfill AbortController for IE, use abort-controller/polyfill.
Importing abort-controller/polyfill assigns the AbortController shim to the AbortController global variable if the native implementation is missing.
https://dom.spec.whatwg.org/#interface-abortcontroller
The AbortSignal object which is associated to this controller.
Dispatches the abort event to the listeners registered on the associated signal.
Contributing is welcome ❤️
Please use GitHub issues/PRs.
- npm install installs dependencies for development.
- npm test runs tests and measures code coverage.
- npm run clean removes temporary files of tests.
- npm run coverage opens code coverage of the previous test with your default browser.
- npm run lint runs ESLint.
- npm run build generates dist codes.
- npm run watch runs tests on each file change.

Returns true if a value exists, false if empty. Works with deeply nested values using object paths.
Install with npm:
Works for booleans, functions, numbers, strings, objects and arrays (pass true as the last arg to treat zero as a value instead of falsey).

Works with nested object paths or a single value:
var hasValue = require('has-value');
hasValue({a: {b: {c: 'foo'}}}, 'a.b.c');
//=> true
hasValue('a');
//=> true
hasValue('');
//=> false
hasValue(1);
//=> true
hasValue(0);
//=> false
hasValue(0, true); // pass `true` as the last arg to treat zero as a value
//=> true
hasValue({a: 'a'});
//=> true
hasValue({});
//=> false
hasValue(['a']);
//=> true
hasValue([]);
//=> false
hasValue(function(foo) {}); // function length/arity
//=> true
hasValue(function() {});
//=> false
hasValue(true);
//=> true

hasValue(false);
//=> true (booleans are always treated as values)

To do the opposite and test for empty values, do:
You might also be interested in these projects:
- get-value: Use property paths (a.b.c) to get a nested value from an object. | homepage
- set-value: Create nested values using object (a.b.c) paths. | homepage
- unset-value: Delete nested properties from an object using (a.b.c) paths. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Generate readme and API documentation with verb:
Or, if verb is installed globally:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb, v, on March 27, 2016.

# is-negated-glob
Returns an object with a negated boolean and the ! stripped from negation patterns. Also respects extglobs.
Install with npm:
var isNegatedGlob = require('is-negated-glob');
console.log(isNegatedGlob('foo'));
// { pattern: 'foo', negated: false }
console.log(isNegatedGlob('!foo'));
// { pattern: 'foo', negated: true }
console.log(isNegatedGlob('!(foo)'));
// extglob patterns are ignored
// { pattern: '!(foo)', negated: false }true if the given string looks like a glob pattern or an extglob pattern… more | homepagePull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.1.30, on September 08, 2016.

# concordance
Compare, format, diff and serialize any JavaScript value. Built for Node.js 10 and above.
Concordance recursively describes JavaScript values, whether they’re booleans or complex object structures. It recurses through all enumerable properties, list items (e.g. arrays) and iterator entries.
The same algorithm is used when comparing, formatting or diffing values. This means Concordance’s behavior is consistent, no matter how you use it.
- Object(1) is treated as different from 1.
- -0 is distinct from 0.
- NaN equals NaN.
- Argument values can be compared to a regular array.
- Error names and messages are always compared, even if these are not enumerable properties.
- Function values are compared by identity only. Names are always formatted and serialized.
- Global objects are considered equal.
- Map keys and Set items are compared in-order.
- Object string properties are compared according to the traversal order. Symbol properties are compared by identity.
- Promise values are compared by identity only.
- Symbol values are compared by identity only.

Concordance strives to format every aspect of a value that is used for comparisons. Formatting is optimized for human legibility.
Strings enjoy special formatting:
Similarly, line breaks in symbol descriptions are escaped.
Concordance tries to minimize diff lines. This is difficult with object values, which may have similar properties but a different constructor. Multi-line strings are compared line-by-line.
Concordance can serialize any value for later use. Deserialized values can be compared to each other or to regular JavaScript values. The deserialized value should be passed as the actual value to the comparison and diffing methods. Certain value comparisons behave differently when the actual value is deserialized:
- Argument values can only be compared to other Argument values.
- Function values are compared by name.
- Promise values are compared by their constructor and additional enumerable properties, but not by identity.
- Symbol values are compared by their string serialization. Registered and well-known symbols will never equal symbols with similar descriptions.

Turn a *-wildcard style glob ("*.min.js") into a regular expression (/^.*\.min\.js$/)!
To match bash-like globs, eg. ? for any single-character match, [a-z] for character ranges, and {*.html, *.js} for multiple alternatives, call with { extended: true }.
To obey globstar ** rules, set the option {globstar: true}. NOTE: this changes the behavior of * when globstar is true, as shown below.

When {globstar: true}:
- /foo/** will match any string that starts with /foo/, like /foo/index.htm, /foo/bar/baz.txt, etc.
- /foo/**/*.txt will match any string that starts with /foo/ and ends with .txt, like /foo/bar.txt, /foo/bar/baz.txt, etc.
- /foo/* (a single *, not a globstar) will match strings that start with /foo/, like /foo/index.htm and /foo/baz.txt, but will not match strings that contain a / to the right, like /foo/bar/baz.txt, /foo/bar/baz/qux.dat, etc.
Set flags on the resulting RegExp object by adding the flags property to the option object, eg { flags: "i" } for ignoring case.
npm install glob-to-regexp
var globToRegExp = require('glob-to-regexp');
var re = globToRegExp("p*uck");
re.test("pot luck"); // true
re.test("pluck"); // true
re.test("puck"); // true
re = globToRegExp("*.min.js");
re.test("http://example.com/jquery.min.js"); // true
re.test("http://example.com/jquery.min.js.map"); // false
re = globToRegExp("*/www/*.js");
re.test("http://example.com/www/app.js"); // true
re.test("http://example.com/www/lib/factory-proxy-model-observer.js"); // true
// Extended globs
re = globToRegExp("*/www/{*.js,*.html}", { extended: true });
re.test("http://example.com/www/app.js"); // true
re.test("http://example.com/www/index.html"); // true

All rights reserved.
Get the first matching pair of braces:
var balanced = require('balanced-match');
console.log(balanced('{', '}', 'pre{in{nested}}post'));
console.log(balanced('{', '}', 'pre{first}between{second}post'));
console.log(balanced(/\s+\{\s+/, /\s+\}\s+/, 'pre { in{nest} } post'));The matches are:
$ node example.js
{ start: 3, end: 14, pre: 'pre', body: 'in{nested}', post: 'post' }
{ start: 3,
end: 9,
pre: 'pre',
body: 'first',
post: 'between{second}post' }
{ start: 3, end: 17, pre: 'pre', body: 'in{nest}', post: 'post' }

For the first non-nested matching pair of a and b in str, return an object with those keys:
- start: the index of the first match of a
- end: the index of the matching b
- pre: the preamble, a and b not included
- body: the match, a and b not included
- post: the postscript, a and b not included

If there's no match, undefined will be returned.
If the str contains more a than b / there are unmatched pairs, the first match that was closed will be used. For example, {{a} will match ['{', 'a', ''] and {a}} will match ['', 'a', '}'].
For the first non-nested matching pair of a and b in str, return an array with indexes: [ <a index>, <b index> ].
If there’s no match, undefined will be returned.
If the str contains more a than b / there are unmatched pairs, the first match that was closed will be used. For example, {{a} will match [ 1, 3 ] and {a}} will match [0, 2].
With npm do:
Returns true if a value has the characteristics of a valid JavaScript data descriptor.
Install with npm:
true when the descriptor has valid properties with valid values.
// `value` can be anything
isDataDesc({value: 'foo'})
isDataDesc({value: function() {}})
isDataDesc({value: true})
//=> true

false when not an object
false when the object has invalid properties
isDataDesc({value: 'foo', bar: 'baz'})
//=> false
isDataDesc({value: 'foo', get: function(){}})
//=> false
isDataDesc({get: function(){}, value: 'foo'})
//=> false

false when a value is not the correct type
isDataDesc({value: 'foo', enumerable: 'foo'})
//=> false
isDataDesc({value: 'foo', configurable: 'foo'})
//=> false
isDataDesc({value: 'foo', writable: 'foo'})
//=> false

The only valid data descriptor properties are the following:
- configurable (required)
- enumerable (required)
- value (optional)
- writable (optional)

To be a valid data descriptor, either value or writable must be defined.
Invalid properties
A descriptor may have additional invalid properties (an error will not be thrown).
var foo = {};
Object.defineProperty(foo, 'bar', {
enumerable: true,
whatever: 'blah', // invalid, but doesn't cause an error
get: function() {
return 'baz';
}
});
console.log(foo.bar);
//=> 'baz'

Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Jon Schlinkert
This file was generated by verb on December 28, 2015.

# fast-deep-equal

The fastest deep equal with ES6 Map, Set and Typed arrays support.
ES6 equal (require('fast-deep-equal/es6')) also supports: - Maps - Sets - Typed arrays
To support ES6 Maps, Sets and Typed arrays equality use:
var equal = require('fast-deep-equal/es6');
console.log(equal(new Int16Array([1, 2]), new Int16Array([1, 2]))); // true

To use with React (avoiding the traversal of React elements' _owner property that contains circular references and is not needed when comparing the elements - borrowed from react-fast-compare):
Node.js v12.6.0:
fast-deep-equal x 261,950 ops/sec ±0.52% (89 runs sampled)
fast-deep-equal/es6 x 212,991 ops/sec ±0.34% (92 runs sampled)
fast-equals x 230,957 ops/sec ±0.83% (85 runs sampled)
nano-equal x 187,995 ops/sec ±0.53% (88 runs sampled)
shallow-equal-fuzzy x 138,302 ops/sec ±0.49% (90 runs sampled)
underscore.isEqual x 74,423 ops/sec ±0.38% (89 runs sampled)
lodash.isEqual x 36,637 ops/sec ±0.72% (90 runs sampled)
deep-equal x 2,310 ops/sec ±0.37% (90 runs sampled)
deep-eql x 35,312 ops/sec ±0.67% (91 runs sampled)
ramda.equals x 12,054 ops/sec ±0.40% (91 runs sampled)
util.isDeepStrictEqual x 46,440 ops/sec ±0.43% (90 runs sampled)
assert.deepStrictEqual x 456 ops/sec ±0.71% (88 runs sampled)
The fastest is fast-deep-equal
To run benchmark (requires node.js 6+):
Please note: this benchmark runs against the available test cases. To choose the most performant library for your application, it is recommended to benchmark against your data and to NOT expect this benchmark to reflect the performance difference in your application.
To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues.
@version 1.4.0
@date 2015-10-26
@stability 3 - Stable

Compare strings containing a mix of letters and numbers in the way a human being would in sort order. This is described as a “natural ordering”.
Standard sorting: Natural order sorting:
img1.png img1.png
img10.png img2.png
img12.png img10.png
img2.png img12.png
String.naturalCompare returns a number indicating whether a reference string comes before, after, or is the same as the given string in sort order. Use it with the built-in sort() function.
npm install natural-compare-lite

// Simple case sensitive example
var a = ["z1.doc", "z10.doc", "z17.doc", "z2.doc", "z23.doc", "z3.doc"];
a.sort(String.naturalCompare);
// ["z1.doc", "z2.doc", "z3.doc", "z10.doc", "z17.doc", "z23.doc"]
// Use wrapper function for case insensitivity
a.sort(function(a, b){
return String.naturalCompare(a.toLowerCase(), b.toLowerCase());
})
// In most cases we want to sort an array of objects
var a = [ {"street":"350 5th Ave", "room":"A-1021"}
, {"street":"350 5th Ave", "room":"A-21046-b"} ];
// sort by street, then by room
a.sort(function(a, b){
return String.naturalCompare(a.street, b.street) || String.naturalCompare(a.room, b.room);
})
// When text transformation is needed (eg toLowerCase()),
// it is best for performance to keep
// transformed key in that object.
// There is no need to do text transformation
// on each comparison when sorting.
var a = [ {"make":"Audi", "model":"A6"}
, {"make":"Kia", "model":"Rio"} ];
// sort by make, then by model
a.forEach(function(car){
car.sort_key = (car.make + " " + car.model).toLowerCase();
})
a.sort(function(a, b){
return String.naturalCompare(a.sort_key, b.sort_key);
})It is possible to configure a custom alphabet to achieve a desired order.
// Estonian alphabet
String.alphabet = "ABDEFGHIJKLMNOPRSŠZŽTUVÕÄÖÜXYabdefghijklmnoprsšzžtuvõäöüxy"
["t", "z", "x", "õ"].sort(String.naturalCompare)
// ["z", "t", "õ", "x"]
// Russian alphabet
String.alphabet = "АБВГДЕЁЖЗИЙКЛМНОПРСТУФХЦЧШЩЪЫЬЭЮЯабвгдеёжзийклмнопрстуфхцчшщъыьэюя"
["Ё", "А", "Б"].sort(String.naturalCompare)
// ["А", "Б", "Ё"]HTTP response freshness testing
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
npm install fresh
Check freshness of the response using request and response headers.
When the response is still “fresh” in the client’s cache true is returned, otherwise false is returned to indicate that the client cache is now stale and the full response should be sent.
When a client sends the Cache-Control: no-cache request header to indicate an end-to-end reload request, this module will return false to make handling these requests transparent.
This module is designed to only follow the HTTP specifications, not to work around all kinds of client bugs (especially since this module typically does not receive enough information to understand what the client actually is).
There is a known issue that in certain versions of Safari, Safari will incorrectly make a request that allows this module to validate freshness of the resource even when Safari does not have a representation of the resource in the cache. The module jumanji can be used in an Express application to work-around this issue and also provides links to further reading on this Safari bug.
var reqHeaders = { 'if-none-match': '"foo"' }
var resHeaders = { 'etag': '"bar"' }
fresh(reqHeaders, resHeaders)
// => false
var reqHeaders = { 'if-none-match': '"foo"' }
var resHeaders = { 'etag': '"foo"' }
fresh(reqHeaders, resHeaders)
// => true

var fresh = require('fresh')
var http = require('http')
var server = http.createServer(function (req, res) {
// perform server logic
// ... including adding ETag / Last-Modified response headers
if (isFresh(req, res)) {
// client has a fresh copy of resource
res.statusCode = 304
res.end()
return
}
// send the resource
res.statusCode = 200
res.end('hello, world!')
})
function isFresh (req, res) {
return fresh(req.headers, {
'etag': res.getHeader('ETag'),
'last-modified': res.getHeader('Last-Modified')
})
}
server.listen(3000)

json-parse-even-better-errors is a Node.js library for getting nicer errors out of JSON.parse(), including context and position of the parse errors.
It also preserves the newline and indentation styles of the JSON data, by putting them in the object or array in the Symbol.for('indent') and Symbol.for('newline') properties.
npm install --save json-parse-even-better-errors
const parseJson = require('json-parse-even-better-errors')
parseJson('"foo"') // returns the string 'foo'
parseJson('garbage') // more useful error message
parseJson.noExceptions('garbage') // returns undefined

- noExceptions method that returns undefined rather than throwing.
- Symbol.for('newline') property on objects and arrays.
- Symbol.for('indent') property on objects and arrays.

To preserve indentation when the file is saved back to disk, use data[Symbol.for('indent')] as the third argument to JSON.stringify, and if you want to preserve windows \r\n newlines, replace the \n chars in the string with data[Symbol.for('newline')].
For example:
const txt = await readFile('./package.json', 'utf8')
const data = parseJsonEvenBetterErrors(txt)
const indent = Symbol.for('indent')
const newline = Symbol.for('newline')
// .. do some stuff to the data ..
const string = JSON.stringify(data, null, data[indent]) + '\n'
const eolFixed = data[newline] === '\n' ? string
: string.replace(/\n/g, data[newline])
await writeFile('./package.json', eolFixed)Indentation is determined by looking at the whitespace between the initial { and [ and the character that follows it. If you have lots of weird inconsistent indentation, then it won’t track that or give you any way to preserve it. Whether this is a bug or a feature is debatable ;)
parse(txt, reviver = null, context = 20)

Works just like JSON.parse, but will include a bit more information when an error happens, and attaches a Symbol.for('indent') and Symbol.for('newline') on objects and arrays. This throws a JSONParseError.
parse.noExceptions(txt, reviver = null)

Works just like JSON.parse, but will return undefined rather than throwing an error.
class JSONParseError(er, text, context = 20, caller = null)

Extends the JavaScript SyntaxError class to parse the message and provide better metadata.
Pass in the error thrown by the built-in JSON.parse, and the text being parsed, and it’ll parse out the bits needed to be helpful.
context defaults to 20.
Set a caller function to trim internal implementation details out of the stack trace. When calling parseJson, this is set to the parseJson function. If not set, then the constructor defaults to itself, so the stack trace will point to the spot where you call new JSONParseError.
JavaScript MD5 implementation.
Compatible with server-side environments like Node.js, module loaders like RequireJS or webpack and all web browsers.
Install the blueimp-md5 package with NPM:
Include the (minified) JavaScript MD5 script in your HTML markup:
In your application code, calculate the (hex-encoded) MD5 hash of a string by calling the md5 method with the string as argument:
The following is an example how to use the JavaScript MD5 module on the server-side with Node.js.
Install the blueimp-md5 package with NPM:
Add a file server.js with the following content:
require('http')
.createServer(function (req, res) {
// The md5 module exports the md5() function:
var md5 = require('./md5'),
// Use the following version if you installed the package with npm:
// var md5 = require("blueimp-md5"),
url = require('url'),
query = url.parse(req.url).query
res.writeHead(200, { 'Content-Type': 'text/plain' })
// Calculate and print the MD5 hash of the url query:
res.end(md5(query))
})
.listen(8080, 'localhost')
console.log('Server running at http://localhost:8080/')

Run the application with the following command:
The JavaScript MD5 script has zero dependencies.
Calculate the (hex-encoded) MD5 hash of a given string value:
Calculate the (hex-encoded) HMAC-MD5 hash of a given string value and key:
Calculate the raw MD5 hash of a given string value:
Calculate the raw HMAC-MD5 hash of a given string value and key:
The JavaScript MD5 project comes with Unit Tests.
There are two different ways to run the tests:
- Open test/index.html in your browser, or
- run npm test in the Terminal in the root path of the repository package.

The first one tests the browser integration, the second one the Node.js integration.
An efficient Javascript implementation of the Levenshtein algorithm with locale-specific collator support.
Install using npm:
Using bower:
If you are not using any module loader system, the API will be accessible via the window.Levenshtein object.
Default usage
var levenshtein = require('fast-levenshtein');
var distance = levenshtein.get('back', 'book'); // 2
var distance = levenshtein.get('我愛你', '我叫你'); // 1

Locale-sensitive string comparisons
It supports using Intl.Collator for locale-sensitive string comparisons:
var levenshtein = require('fast-levenshtein');
levenshtein.get('mikailovitch', 'Mikhaïlovitch', { useCollator: true});
// 1

To build the code and run the tests:
Thanks to Titus Wormer for encouraging me to do this.
Benchmarked against other node.js levenshtein distance modules (on Macbook Air 2012, Core i7, 8GB RAM):
Running suite Implementation comparison [benchmark/speed.js]...
>> levenshtein-edit-distance x 234 ops/sec ±3.02% (73 runs sampled)
>> levenshtein-component x 422 ops/sec ±4.38% (83 runs sampled)
>> levenshtein-deltas x 283 ops/sec ±3.83% (78 runs sampled)
>> natural x 255 ops/sec ±0.76% (88 runs sampled)
>> levenshtein x 180 ops/sec ±3.55% (86 runs sampled)
>> fast-levenshtein x 1,792 ops/sec ±2.72% (95 runs sampled)
Benchmark done.
Fastest test is fast-levenshtein at 4.2x faster than levenshtein-component

You can run this benchmark yourself by doing:
If you wish to submit a pull request please update and/or create new tests for any changes you make and ensure the grunt build passes.
See CONTRIBUTING.md for details.
Returns a filtered copy of an object with only the specified keys, similar to _.pick from lodash / underscore.
You might also be interested in object.omit.
Install with npm:
This is the fastest implementation I tested. Pull requests welcome!
var pick = require('object.pick');
pick({a: 'a', b: 'b'}, 'a')
//=> {a: 'a'}
pick({a: 'a', b: 'b', c: 'c'}, ['a', 'b'])
//=> {a: 'a', b: 'b'}a.b.c) to get a nested value from an object. | homepage'a.b.c') paths. | homepagePull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.2.0, on October 27, 2016.

# collection-visit
Visit a method over the items in an object, or map visit over the objects in an array.
Install with npm:
var visit = require('collection-visit');
var ctx = {
data: {},
set: function (key, value) {
if (typeof key === 'object') {
visit(ctx, 'set', key);
} else {
ctx.data[key] = value;
}
}
};
ctx.set('a', 'a');
ctx.set('b', 'b');
ctx.set('c', 'c');
ctx.set({d: {e: 'f'}});
console.log(ctx.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }};

- map-visit: Map visit over an array of objects. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 13 | jonschlinkert |
| 9 | doowb |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.5.0, on April 09, 2017.

# tslib
This is a runtime library for TypeScript that contains all of the TypeScript helper functions.
This library is primarily used by the --importHelpers flag in TypeScript. When using --importHelpers, a module that uses helper functions like __extends and __assign in the following emitted file:
var __assign = (this && this.__assign) || Object.assign || function(t) {
for (var s, i = 1, n = arguments.length; i < n; i++) {
s = arguments[i];
for (var p in s) if (Object.prototype.hasOwnProperty.call(s, p))
t[p] = s[p];
}
return t;
};
exports.x = {};
exports.y = __assign({}, exports.x);

will instead be emitted as something like the following:
Because this can avoid duplicate declarations of things like __extends, __assign, etc., this means delivering users smaller files on average, as well as less runtime overhead. For optimized bundles with TypeScript, you should absolutely consider using tslib and --importHelpers.
For the latest stable version, run:
# TypeScript 2.3.3 or later
bower install tslib
# TypeScript 2.3.2 or earlier
bower install tslib@1.6.1

# TypeScript 2.3.3 or later
jspm install tslib
# TypeScript 2.3.2 or earlier
jspm install tslib@1.6.1Set the importHelpers compiler option on the command line:
tsc --importHelpers file.ts
or in your tsconfig.json:
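For example, a minimal tsconfig.json enabling the flag might be:

```json
{
  "compilerOptions": {
    "importHelpers": true
  }
}
```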
You will need to add a paths mapping for tslib, e.g. For Bower users:
{
"compilerOptions": {
"module": "amd",
"importHelpers": true,
"baseUrl": "./",
"paths": {
"tslib" : ["bower_components/tslib/tslib.d.ts"]
}
}
}

For JSPM users:
{
"compilerOptions": {
"module": "system",
"importHelpers": true,
"baseUrl": "./",
"paths": {
"tslib" : ["jspm_packages/npm/tslib@1.[version].0/tslib.d.ts"]
}
}
}

There are many ways to contribute to TypeScript.
Iterate over the own and inherited enumerable properties of an object, and return an object with properties that evaluate to true from the callback. Exit early by returning false.
Install with npm:
var forIn = require('for-in');
var obj = {a: 'foo', b: 'bar', c: 'baz'};
var values = [];
var keys = [];
forIn(obj, function (value, key, o) {
keys.push(key);
values.push(value);
});
console.log(keys);
//=> ['a', 'b', 'c'];
console.log(values);
//=> ['foo', 'bar', 'baz'];

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 16 | jonschlinkert |
| 2 | paulirish |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.4.2, on February 28, 2017.

# Statuses
HTTP status utility for node.
This module provides a list of status codes and messages sourced from a few different projects:
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
If Integer or String is a valid HTTP code or status message, then the appropriate code will be returned. Otherwise, an error will be thrown.
status(403) // => 403
status('403') // => 403
status('forbidden') // => 403
status('Forbidden') // => 403
status(306) // throws, as it's not supported by node.js

Returns an object which maps status codes to status messages, in the same format as the Node.js http module.
Returns an array of all the status codes as Integers.
Map of code to status message. undefined for invalid codes.
Map of status message to code. msg can either be title-cased or lower-cased. undefined for invalid status messages.
Returns true if a status code is a valid redirect status.
Returns true if a status code expects an empty body.
Returns true if you should retry the request.
Like request, but much smaller - and with less options. Uses node-fetch under the hood. Pop it in where you would use request. Improves load and parse time of modules.
const request = require('teeny-request').teenyRequest;
request({uri: 'http://ip.jsontest.com/'}, function (error, response, body) {
console.log('error:', error); // Print the error if one occurred
console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received
console.log('body:', body); // Print the JSON.
});For TypeScript, you can use @types/request.
import {teenyRequest as request} from 'teeny-request';
import * as r from 'request'; // Only for type declarations
request({uri: 'http://ip.jsontest.com/'}, (error: any, response: r.Response, body: any) => {
console.log('error:', error); // Print the error if one occurred
console.log('statusCode:', response && response.statusCode); // Print the response status code if a response was received
console.log('body:', body); // Print the JSON.
});

Options are limited to the following
The callback argument gets 3 arguments:
Set default options for every teenyRequest call.
let defaultRequest = teenyRequest.defaults({timeout: 60000});
defaultRequest({uri: 'http://ip.jsontest.com/'}, function (error, response, body) {
assert.ifError(error);
assert.strictEqual(response.statusCode, 200);
console.log(body.ip);
assert.notEqual(body.ip, null);
done();
});

If environment variables HTTP_PROXY or HTTPS_PROXY are set, they are respected. NO_PROXY is currently not implemented.
Since 4.0.0, Webpack uses javascript/esm for .mjs files which handles ESM more strictly compared to javascript/auto. If you get the error Can't import the named export 'PassThrough' from non EcmaScript module, please add the following to your Webpack config:
request has a ton of options and features and is accordingly large. Requiring a module incurs load and parse time. For request, that is around 600ms.

teeny-request doesn’t have any of the bells and whistles that request has, but is so much faster to load. If startup time is an issue and you don’t need much beyond a basic GET and POST, you can use teeny-request.
Special thanks to billyjacobson for suggesting the name. Please report all bugs to them. Just kidding. Please open issues.
Say you’re using the ‘buffer’ module on npm, or browserify and you’re working with lots of binary data.
Unfortunately, sometimes the browser or someone else’s API gives you a typed array like Uint8Array to work with and you need to convert it to a Buffer. What do you do?
Of course: Buffer.from(uint8array)
But, alas, every time you do Buffer.from(uint8array) the entire array gets copied. The Buffer constructor does a copy; this is defined by the node docs and the ‘buffer’ module matches the node API exactly.
So, how can we avoid this expensive copy in performance critical applications?
Simply use this module, of course!
If you have an ArrayBuffer, you don’t need this module, because Buffer.from(arrayBuffer) is already efficient.
To convert a typed array to a Buffer without a copy, do this:
var toBuffer = require('typedarray-to-buffer')
var arr = new Uint8Array([1, 2, 3])
arr = toBuffer(arr)
// arr is a buffer now!
arr.toString() // '\u0001\u0002\u0003'
arr.readUInt16BE(0) // 258

If the browser supports typed arrays, then toBuffer will augment the typed array you pass in with the Buffer methods and return it. See how does Buffer work? for more about how augmentation works.
This module uses the typed array’s underlying ArrayBuffer to back the new Buffer. This respects the “view” on the ArrayBuffer, i.e. byteOffset and byteLength. In other words, if you do toBuffer(new Uint32Array([1, 2, 3])), then the new Buffer will contain [1, 0, 0, 0, 2, 0, 0, 0, 3, 0, 0, 0], not [1, 2, 3]. And it still doesn’t require a copy.
If the browser doesn’t support typed arrays, then toBuffer will create a new Buffer object, copy the data into it, and return it. There’s no simple performance optimization we can do for old browsers. Oh well.
If this module is used in node, then it will just call Buffer.from. This is just for the convenience of modules that work in both node and the browser.
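In modern Node the zero-copy conversion is also available directly, since Buffer.from(arrayBuffer, byteOffset, length) shares memory instead of copying; a quick sketch of the difference:

```javascript
var arr = new Uint8Array([1, 2, 3]);

// Buffer.from(typedArray) copies: `copy` is independent of `arr`.
var copy = Buffer.from(arr);

// Buffer.from(arrayBuffer, offset, length) shares memory with `arr`,
// respecting the view's byteOffset and byteLength.
var view = Buffer.from(arr.buffer, arr.byteOffset, arr.byteLength);

view[0] = 9;
console.log(arr[0]);  // 9 - same underlying memory
console.log(copy[0]); // 1 - untouched copy
```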
Returns true if a string has an extglob.
Install with npm:
True
isExtglob('?(abc)');
isExtglob('@(abc)');
isExtglob('!(abc)');
isExtglob('*(abc)');
isExtglob('+(abc)');

False
Escaped extglobs:
isExtglob('\\?(abc)');
isExtglob('\\@(abc)');
isExtglob('\\!(abc)');
isExtglob('\\*(abc)');
isExtglob('\\+(abc)');

Everything else…
isExtglob('foo.js');
isExtglob('!foo.js');
isExtglob('*.js');
isExtglob('**/abc.js');
isExtglob('abc/*.js');
isExtglob('abc/(aaa|bbb).js');
isExtglob('abc/[a-z].js');
isExtglob('abc/{a,b}.js');
isExtglob('abc/?.js');
isExtglob('abc.js');
isExtglob('abc/def/ghi.js');

v2.0
Adds support for escaping. Escaped extglobs no longer return true.
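The detection above, including the escaping rule, can be approximated with a single regex; this is a hypothetical sketch, not the module's implementation:

```javascript
// An extglob token is one of ?( @( !( *( +( that is not
// preceded by a backslash escape.
function isExtglobSketch(str) {
  if (typeof str !== 'string' || str === '') return false;
  return /(^|[^\\])[@?!+*]\(/.test(str);
}

console.log(isExtglobSketch('?(abc)'));   // true
console.log(isExtglobSketch('\\?(abc)')); // false (escaped)
console.log(isExtglobSketch('*.js'));     // false (plain glob, not extglob)
```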
true if an array has a glob pattern. | homepage
true if the given string looks like a glob pattern or an extglob pattern… more | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.1.31, on October 12, 2016.
# map-cache
Basic cache object for storing key-value pairs.
Install with npm:
Creates a cache object to store key/value pairs.
Example
Adds value to key on the cache.
Params
- key {String}: The key of the value to cache.
- value {any}: The value to cache.
- returns {Object}: Returns the Cache object for chaining.

Example
Gets the cached value for key.
Params
- key {String}: The key of the value to get.
- returns {any}: Returns the cached value.

Example
Checks if a cached value for key exists.
Params
- key {String}: The key of the entry to check.
- returns {Boolean}: Returns true if an entry for key exists, else false.

Example
Removes key and its value from the cache.
Params
- key {String}: The key of the value to remove.
- returns {Boolean}: Returns true if the entry was removed successfully, else false.

Example
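Put together, the four methods amount to a thin wrapper over a plain object; a hypothetical re-implementation for illustration:

```javascript
function MapCache(cache) {
  this.cache = cache || {};
}

// set returns the cache object so calls can be chained
MapCache.prototype.set = function (key, value) {
  this.cache[key] = value;
  return this;
};

MapCache.prototype.get = function (key) {
  return this.cache[key];
};

MapCache.prototype.has = function (key) {
  return Object.prototype.hasOwnProperty.call(this.cache, key);
};

MapCache.prototype.del = function (key) {
  return delete this.cache[key];
};

var cache = new MapCache();
cache.set('a', 1).set('b', 2);
console.log(cache.get('a')); // 1
console.log(cache.has('c')); // false
```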
You might also be interested in these projects:
get, set, del, and has methods for node.js/javascript projects. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Generate readme and API documentation with verb:
Or, if verb is installed globally:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb, v0.9.0, on May 10, 2016.
# fast-json-stable-stringify
Deterministic JSON.stringify() - a faster version of [@substack](https://github.com/substack)’s json-stable-stringify without jsonify.
You can also pass in a custom comparison function.
var stringify = require('fast-json-stable-stringify');
var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 };
console.log(stringify(obj));

output:
{"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8}
Return a deterministic stringified string str from the object obj.
If opts is given, you can supply an opts.cmp to have a custom comparison function for object keys. Your function opts.cmp is called with these parameters:
For example, to sort on the object key names in reverse order you could write:
var stringify = require('fast-json-stable-stringify');
var obj = { c: 8, b: [{z:6,y:5,x:4},7], a: 3 };
var s = stringify(obj, function (a, b) {
return a.key < b.key ? 1 : -1;
});
console.log(s);

which results in the output string:
{"c":8,"b":[{"z":6,"y":5,"x":4},7],"a":3}
Or if you wanted to sort on the object values in reverse order, you could write:
var stringify = require('fast-json-stable-stringify');
var obj = { d: 6, c: 5, b: [{z:3,y:2,x:1},9], a: 10 };
var s = stringify(obj, function (a, b) {
return a.value < b.value ? 1 : -1;
});
console.log(s);
which outputs:
{"d":6,"c":5,"b":[{"z":3,"y":2,"x":1},9],"a":10}
Pass true in opts.cycles to stringify circular property as __cycle__ - the result will not be a valid JSON string in this case.
TypeError will be thrown in case of circular object without this option.
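The core idea — JSON with object keys emitted in sorted order — fits in a few lines. This sketch omits the cmp and cycles options described above:

```javascript
// Minimal deterministic stringify: sort object keys, recurse.
function stableStringify(obj) {
  if (Array.isArray(obj)) {
    return '[' + obj.map(stableStringify).join(',') + ']';
  }
  if (obj && typeof obj === 'object') {
    var keys = Object.keys(obj).sort();
    return '{' + keys.map(function (k) {
      return JSON.stringify(k) + ':' + stableStringify(obj[k]);
    }).join(',') + '}';
  }
  return JSON.stringify(obj);
}

var obj = { c: 8, b: [{ z: 6, y: 5, x: 4 }, 7], a: 3 };
console.log(stableStringify(obj));
// {"a":3,"b":[{"x":4,"y":5,"z":6},7],"c":8}
```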
With npm do:
npm install fast-json-stable-stringify
To run benchmark (requires Node.js 6+):
node benchmark
Results:
fast-json-stable-stringify x 17,189 ops/sec ±1.43% (83 runs sampled)
json-stable-stringify x 13,634 ops/sec ±1.39% (85 runs sampled)
fast-stable-stringify x 20,212 ops/sec ±1.20% (84 runs sampled)
faster-stable-stringify x 15,549 ops/sec ±1.12% (84 runs sampled)
The fastest is fast-stable-stringify
To report a security vulnerability, please use the Tidelift security contact. Tidelift will coordinate the fix and disclosure. Please do NOT report security vulnerability via GitHub issues.
This is a fork of eslint-scope, enhanced to support TypeScript functionality. You can view the original licence for the code here.
This package is consumed automatically by @typescript-eslint/parser. You probably don’t want to use it directly.
You can find our Getting Started docs here
$ yarn add -D typescript @typescript-eslint/scope-manager
$ npm i --save-dev typescript @typescript-eslint/scope-manager

analyze(tree, options)

Analyses a given AST and returns the resulting ScopeManager.
interface AnalyzeOptions {
/**
* Known visitor keys.
*/
childVisitorKeys?: Record<string, string[]> | null;
/**
* Which ECMAScript version is considered.
* Defaults to `2018`.
*/
ecmaVersion?: EcmaVersion;
/**
* Whether the whole script is executed under node.js environment.
* When enabled, the scope manager adds a function scope immediately following the global scope.
* Defaults to `false`.
*/
globalReturn?: boolean;
/**
* Implied strict mode (if ecmaVersion >= 5).
* Defaults to `false`.
*/
impliedStrict?: boolean;
/**
* The identifier that's used for JSX Element creation (after transpilation).
* This should not be a member expression - just the root identifier (i.e. use "React" instead of "React.createElement").
* Defaults to `"React"`.
*/
jsxPragma?: string;
/**
* The identifier that's used for JSX fragment elements (after transpilation).
* If `null`, assumes transpilation will always use a member on `jsxFactory` (i.e. React.Fragment).
* This should not be a member expression - just the root identifier (i.e. use "h" instead of "h.Fragment").
* Defaults to `null`.
*/
jsxFragmentName?: string | null;
/**
* The lib used by the project.
* This automatically defines a type variable for any types provided by the configured TS libs.
* For more information, see https://www.typescriptlang.org/tsconfig#lib
*
* Defaults to the lib for the provided `ecmaVersion`.
*/
lib?: Lib[];
/**
* The source type of the script.
*/
sourceType?: 'script' | 'module';
}

Example usage:
import { analyze } from '@typescript-eslint/scope-manager';
import { parse } from '@typescript-eslint/typescript-estree';
const code = `const hello: string = 'world';`;
const ast = parse(code, {
// note that scope-manager requires ranges on the AST
range: true,
});
const scope = analyze(ast, {
ecmaVersion: 2020,
sourceType: 'module',
});

See the contributing guide here
Call a specified method on each value in the given object.
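The core idea — call thisArg[method] once for each own key/value of the target object — can be sketched like this (a hypothetical re-implementation, not the module source):

```javascript
function visitSketch(thisArg, method, target) {
  Object.keys(target).forEach(function (key) {
    thisArg[method](key, target[key]);
  });
  return thisArg;
}

var ctx = {
  data: {},
  set: function (key, value) {
    ctx.data[key] = value;
  }
};

visitSketch(ctx, 'set', { a: 'a', b: 'b' });
console.log(ctx.data); // { a: 'a', b: 'b' }
```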
Install with npm:
var visit = require('object-visit');
var ctx = {
data: {},
set: function (key, value) {
if (typeof key === 'object') {
visit(ctx, 'set', key);
} else {
ctx.data[key] = value;
}
}
};
ctx.set('a', 'a');
ctx.set('b', 'b');
ctx.set('c', 'c');
ctx.set({d: {e: 'f'}});
console.log(ctx.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }};

visit over an array of objects. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on May 30, 2017.
# use
Easily add plugin support to your node.js application.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
A different take on plugin handling! This is not a middleware system, if you need something that handles async middleware, ware is great for that.
See the examples folder for usage examples.
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 37 | jonschlinkert |
| 7 | charlike-old |
| 2 | doowb |
| 2 | wtgtybhertgeghgtwtg |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 12, 2018.
# arr-union
Combines a list of arrays, returning a single array with unique values, using strict equality for comparisons.
Install with npm:
This library is 10-20 times faster and more performant than array-union.
See the benchmarks.
#1: five-arrays
array-union x 511,121 ops/sec ±0.80% (96 runs sampled)
arr-union x 5,716,039 ops/sec ±0.86% (93 runs sampled)
#2: ten-arrays
array-union x 245,196 ops/sec ±0.69% (94 runs sampled)
arr-union x 1,850,786 ops/sec ±0.84% (97 runs sampled)
#3: two-arrays
array-union x 563,869 ops/sec ±0.97% (94 runs sampled)
arr-union x 9,602,852 ops/sec ±0.87% (92 runs sampled)

var union = require('arr-union');
union(['a'], ['b', 'c'], ['d', 'e', 'f']);
//=> ['a', 'b', 'c', 'd', 'e', 'f']

Returns only unique elements:
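A minimal sketch of that behavior, using indexOf for the strict-equality membership test (illustrative only, not the module's source):

```javascript
function union(init /* , ...arrays */) {
  var result = init.slice();
  for (var i = 1; i < arguments.length; i++) {
    var arr = arguments[i];
    for (var j = 0; j < arr.length; j++) {
      // indexOf uses strict equality, matching the module's contract
      if (result.indexOf(arr[j]) === -1) result.push(arr[j]);
    }
  }
  return result;
}

console.log(union(['a'], ['b', 'c'], ['d', 'e', 'f']));
// [ 'a', 'b', 'c', 'd', 'e', 'f' ]
console.log(union(['a'], ['b', 'a'], ['a', 'c']));
// [ 'a', 'b', 'c' ]
```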
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Generate readme and API documentation with verb:
Or, if verb is installed globally:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb, v0.9.0, on February 23, 2016.
# glob-parent
Javascript module to extract the non-magic parent path from a glob string.
Examples
var globParent = require('glob-parent');
globParent('path/to/*.js'); // 'path/to'
globParent('/root/path/to/*.js'); // '/root/path/to'
globParent('/*.js'); // '/'
globParent('*.js'); // '.'
globParent('**/*.js'); // '.'
globParent('path/{to,from}'); // 'path'
globParent('path/!(to|from)'); // 'path'
globParent('path/?(to|from)'); // 'path'
globParent('path/+(to|from)'); // 'path'
globParent('path/*(to|from)'); // 'path'
globParent('path/@(to|from)'); // 'path'
globParent('path/**/*'); // 'path'
// if provided a non-glob path, returns the nearest dir
globParent('path/foo/bar.js'); // 'path/foo'
globParent('path/foo/'); // 'path/foo'
globParent('path/foo'); // 'path' (see issue #3 for details)

The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters:
- ? (question mark)
- * (star)
- | (pipe)
- ( (opening parenthesis)
- ) (closing parenthesis)
- { (opening curly brace)
- } (closing curly brace)
- [ (opening bracket)
- ] (closing bracket)

Example
This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with expand-braces and/or expand-brackets.
Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators).
This cannot be used in conjunction with escape characters.
// BAD
globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)'
// GOOD
globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)'

If you are using escape characters for a pattern without path parts (i.e. relative to cwd), prefix with ./ to avoid confusing glob-parent.
// BAD
globParent('foo \\[bar]') // 'foo '
globParent('foo \\[bar]*') // 'foo '
// GOOD
globParent('./foo \\[bar]') // 'foo [bar]'
globParent('./foo \\[bar]*') // '.'

See release notes page on GitHub
clone offers foolproof deep cloning of objects, arrays, numbers, strings etc. in JavaScript.
npm install clone
(It also works with browserify, ender or standalone.)
var clone = require('clone');
var a, b;
a = { foo: { bar: 'baz' } }; // initial value of a
b = clone(a); // clone a -> b
a.foo.bar = 'foo'; // change a
console.log(a); // show a
console.log(b); // show b

This will print:
clone masters cloning simple objects (even with custom prototype), arrays, Date objects, and RegExp objects. Everything is cloned recursively, so that you can clone dates in arrays in objects, for example.
clone(val, circular, depth)
- val – the value that you want to clone, any type allowed
- circular – boolean. Call clone with circular set to false if you are certain that obj contains no circular references. This will give better performance if needed. There is no error if undefined or null is passed as obj.
- depth – depth to which the object is to be cloned (optional, defaults to infinity)
clone.clonePrototype(obj)
- obj – the object that you want to clone

Does a prototype clone as described by Oran Looney.
This will print:
So, b.myself points to b, not a. Neat!
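The circular-reference handling can be sketched with a seen-map that reuses the clone already in progress (simplified; the real clone also handles Dates, RegExps, custom prototypes, and the depth limit):

```javascript
function cloneSketch(val, seen) {
  seen = seen || new Map();
  if (val === null || typeof val !== 'object') return val;
  if (seen.has(val)) return seen.get(val); // reuse clone for circular refs
  var out = Array.isArray(val) ? [] : {};
  seen.set(val, out); // register before recursing
  Object.keys(val).forEach(function (k) {
    out[k] = cloneSketch(val[k], seen);
  });
  return out;
}

var a = { foo: { bar: 'baz' } };
a.myself = a;          // circular reference
var b = cloneSketch(a);
console.log(b.myself === b);  // true
console.log(b.foo === a.foo); // false - deep copy
```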
npm test
Some special objects like a socket or process.stdout/stderr are known to not be cloneable. If you find other objects that cannot be cloned, please open an issue.
If you encounter any bugs or issues, feel free to open an issue at github or send me an email to paul@vorba.ch. I also always like to hear from you, if you’re using my code.
Returns true if an object was created by the Object constructor.
Install with npm:
Use isobject if you only want to check if the value is an object and not an array or null.
true when created by the Object constructor.
isPlainObject(Object.create({}));
//=> true
isPlainObject(Object.create(Object.prototype));
//=> true
isPlainObject({foo: 'bar'});
//=> true
isPlainObject({});
//=> true

false when not created by the Object constructor.
isPlainObject(1);
//=> false
isPlainObject(['foo', 'bar']);
//=> false
isPlainObject([]);
//=> false
isPlainObject(new Foo);
//=> false
isPlainObject(null);
//=> false
isPlainObject(Object.create(null));
//=> false

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
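The behavior illustrated above can be sketched with a prototype check (simplified; it diverges on Object.create({}), which the real module accepts):

```javascript
// Simplified sketch: plain means "direct instance of Object". It differs
// from the real module on Object.create({}), which is-plain-object accepts.
function isPlainObjectSketch(val) {
  if (Object.prototype.toString.call(val) !== '[object Object]') return false;
  return Object.getPrototypeOf(val) === Object.prototype;
}

function Foo() {}
console.log(isPlainObjectSketch({ foo: 'bar' }));      // true
console.log(isPlainObjectSketch(new Foo()));           // false
console.log(isPlainObjectSketch(Object.create(null))); // false
```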
| Commits | Contributor |
|---|---|
| 17 | jonschlinkert |
| 6 | stevenvachon |
| 3 | onokumus |
| 1 | wtgtybhertgeghgtwtg |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 11, 2017.
The UNIX command rm -rf for node.
Install with npm install rimraf, or just drop rimraf.js somewhere.
rimraf(f, [opts], callback)
The first parameter will be interpreted as a globbing pattern for files. If you want to disable globbing you can do so with opts.disableGlob (defaults to false). This might be handy, for instance, if you have filenames that contain globbing wildcard characters.
The callback will be called with an error if there is one. Certain errors are handled for you:
- EBUSY and ENOTEMPTY - rimraf will back off a maximum of opts.maxBusyTries times before giving up, adding 100ms of wait between each attempt. The default maxBusyTries is 3.
- ENOENT - If the file doesn’t exist, rimraf will return successfully, since your desired outcome is already the case.
- EMFILE - Since readdir requires opening a file descriptor, it’s possible to hit EMFILE if too many file descriptors are in use. In the sync case, there’s nothing to be done for this. But in the async case, rimraf will gradually back off with timeouts up to opts.emfileWait ms, which defaults to 1000.

unlink, chmod, stat, lstat, rmdir, readdir, unlinkSync, chmodSync, statSync, lstatSync, rmdirSync, readdirSync
In order to use a custom file system library, you can override specific fs functions on the options object.
If any of these functions are present on the options object, then the supplied function will be used instead of the default fs method.
Sync methods are only relevant for rimraf.sync(), of course.
For example:
maxBusyTries
If an EBUSY, ENOTEMPTY, or EPERM error code is encountered on Windows systems, then rimraf will retry with a linear backoff wait of 100ms longer on each try. The default maxBusyTries is 3.
Only relevant for async usage.
emfileWait
If an EMFILE error is encountered, then rimraf will retry repeatedly with a linear backoff of 1ms longer on each try, until the timeout counter hits this max. The default limit is 1000.
If you repeatedly encounter EMFILE errors, then consider using graceful-fs in your program.
Only relevant for async usage.
glob
Set to false to disable glob pattern matching.
Set to an object to pass options to the glob module. The default glob options are { nosort: true, silent: true }.
Glob version 6 is used in this module.
Relevant for both sync and async usage.
disableGlob
Set to any non-falsey value to disable globbing entirely. (Equivalent to setting glob: false.)
It can remove stuff synchronously, too. But that’s not so good. Use the async API. It’s better.
If installed with npm install rimraf -g it can be used as a global command rimraf <path> [<path> ...] which is useful for cross platform support.
If you need to create a directory recursively, check out mkdirp.
Returns true if a value has the characteristics of a valid JavaScript accessor descriptor.
(TOC generated by verb using markdown-toc)
Install with npm:
You may also pass an object and property name to check if the property is an accessor:
false when not an object
true when the object has valid properties
and the properties all have the correct JavaScript types:
false when the object has invalid properties
isAccessor({get: noop, set: noop, bar: 'baz'})
isAccessor({get: noop, writable: true})
isAccessor({get: noop, value: true})
//=> false

false when an accessor is not a function
isAccessor({get: noop, set: 'baz'})
isAccessor({get: 'foo', set: noop})
isAccessor({get: 'foo', bar: 'baz'})
isAccessor({get: 'foo', set: 'baz'})
//=> false

false when a value is not the correct type
isAccessor({get: noop, set: noop, enumerable: 'foo'})
isAccessor({set: noop, configurable: 'foo'})
isAccessor({get: noop, configurable: 'foo'})
//=> false

Install dev dependencies:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Jon Schlinkert
This file was generated by verb on December 28, 2015.
# node-url
This module has utilities for URL resolution and parsing meant to have feature parity with node.js core url module.
Parsed URL objects have some or all of the following fields, depending on whether or not they exist in the URL string. Any parts that are not in the URL string will not be in the parsed object. Examples are shown for the URL
'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'
href: The full URL that was originally parsed. Both the protocol and host are lowercased.
Example: 'http://user:pass@host.com:8080/p/a/t/h?query=string#hash'
protocol: The request protocol, lowercased.
Example: 'http:'
host: The full lowercased host portion of the URL, including port information.
Example: 'host.com:8080'
auth: The authentication information portion of a URL.
Example: 'user:pass'
hostname: Just the lowercased hostname portion of the host.
Example: 'host.com'
port: The port number portion of the host.
Example: '8080'
pathname: The path section of the URL, that comes after the host and before the query, including the initial slash if present.
Example: '/p/a/t/h'
search: The ‘query string’ portion of the URL, including the leading question mark.
Example: '?query=string'
path: Concatenation of pathname and search.
Example: '/p/a/t/h?query=string'
query: Either the ‘params’ portion of the query string, or a querystring-parsed object.
Example: 'query=string' or {'query':'string'}
hash: The ‘fragment’ portion of the URL including the pound-sign.
Example: '#hash'
The following methods are provided by the URL module:
Take a URL string, and return an object.
Pass true as the second argument to also parse the query string using the querystring module. Defaults to false.
Pass true as the third argument to treat //foo/bar as { host: 'foo', pathname: '/bar' } rather than { pathname: '//foo/bar' }. Defaults to false.
Take a parsed URL object, and return a formatted URL string.
href will be ignored.protocol is treated the same with or without the trailing : (colon).
http, https, ftp, gopher, file will be postfixed with :// (colon-slash-slash).mailto, xmpp, aim, sftp, foo, etc will be postfixed with : (colon)auth will be used if present.hostname will only be used if host is absent.port will only be used if host is absent.host will be used in place of hostname and portpathname is treated the same with or without the leading / (slash)search will be used in place of queryquery (object; see querystring) will only be used if search is absent.search is treated the same with or without the leading ? (question mark)hash is treated the same with or without the leading # (pound sign, anchor)Take a base URL, and a href URL, and resolve them as a browser would for an anchor tag. Examples:
url.resolve('/one/two/three', 'four') // '/one/two/four'
url.resolve('http://example.com/', '/one') // 'http://example.com/one'
url.resolve('http://example.com/one', '/two') // 'http://example.com/two'
Reuse your objects and functions for maximum speed. This technique will make any function run ~10% faster. You call your functions a lot, and it adds up quickly in hot code paths.
$ node benchmarks/createNoCodeFunction.js
Total time 53133
Total iterations 100000000
Iteration/s 1882069.5236482036
$ node benchmarks/reuseNoCodeFunction.js
Total time 50617
Total iterations 100000000
Iteration/s 1975620.838848608
The above benchmark uses fibonacci to simulate a real high-cpu load. The actual numbers might differ for your use case, but the difference should not.
The benchmark was taken using Node v6.10.0.
This library was extracted from fastparallel.
var reusify = require('reusify')
var fib = require('reusify/benchmarks/fib')
var instance = reusify(MyObject)
// get an object from the cache,
// or creates a new one when cache is empty
var obj = instance.get()
// set the state
obj.num = 100
obj.func()
// reset the state.
// if the state contains any external object
// do not use delete operator (it is slow)
// prefer set them to null
obj.num = 0
// store an object in the cache
instance.release(obj)
function MyObject () {
// you need to define this property
// so V8 can compile MyObject into an
// hidden class
this.next = null
this.num = 0
var that = this
// this function is never reallocated,
// so it can be optimized by V8
this.func = function () {
if (null) {
// do nothing
} else {
// calculates fibonacci
fib(that.num)
}
}
}

The above example was intended for synchronous code, let’s see async:
var reusify = require('reusify')
var instance = reusify(MyObject)
for (var i = 0; i < 100; i++) {
getData(i, console.log)
}
function getData (value, cb) {
var obj = instance.get()
obj.value = value
obj.cb = cb
obj.run()
}
function MyObject () {
this.next = null
this.value = null
var that = this
this.run = function () {
asyncOperation(that.value, that.handle)
}
this.handle = function (err, result) {
that.cb(err, result)
that.value = null
that.cb = null
instance.release(that)
}
}

Also note how in the above examples the code that consumes an instance of MyObject resets the state to its initial condition just before storing it in the cache. That’s needed so that every subsequent request for an instance from the cache gets a clean instance.
It is faster because V8 doesn’t have to collect all the functions you create. On a short-lived benchmark, it is as fast as creating the nested function, but on a longer time frame it creates less pressure on the garbage collector.
If you want to see some complex example, checkout middie and steed.
Thanks to Trevor Norris for getting me down the rabbit hole of performance, and thanks to Mathias Buus for suggesting I share this trick.
The ultimate javascript content-type utility.
Similar to the mime@1.x module, except:
- No fallbacks: mime-types simply returns false, so do var type = mime.lookup('unrecognized') || 'application/octet-stream'.
- No new Mime() business, so you could do var lookup = require('mime-types').lookup.
- No .define() functionality.
- Bug fixes for .lookup(path)

Otherwise, the API is compatible with mime 1.x.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
All mime types are based on mime-db, so open a PR there if you’d like to add mime types.
All functions return false if input is invalid or not found.
Lookup the content-type associated with a file.
mime.lookup('json') // 'application/json'
mime.lookup('.md') // 'text/markdown'
mime.lookup('file.html') // 'text/html'
mime.lookup('folder/file.js') // 'application/javascript'
mime.lookup('folder/.htaccess') // false
mime.lookup('cats') // false
Create a full content-type header given a content-type or extension. When given an extension, mime.lookup is used to get the matching content-type, otherwise the given content-type is used. Then, if the content-type does not already have a charset parameter, mime.charset is used to get the default charset and add it to the returned content-type.
mime.contentType('markdown') // 'text/x-markdown; charset=utf-8'
mime.contentType('file.json') // 'application/json; charset=utf-8'
mime.contentType('text/html') // 'text/html; charset=utf-8'
mime.contentType('text/html; charset=iso-8859-1') // 'text/html; charset=iso-8859-1'
// from a full path
mime.contentType(path.extname('/path/to/file.json')) // 'application/json; charset=utf-8'
Get the default extension for a content-type.
Lookup the implied default charset of a content-type.
A map of content-types by extension.
A map of extensions by content-type.
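As a sketch of how these maps and lookups fit together, here is a tiny hypothetical subset of the underlying mime-db data (illustrative only; the real module loads the full database):

```javascript
// hypothetical miniature of the mime-db data this module is built on
const db = {
  'application/json': { extensions: ['json'], charset: 'UTF-8' },
  'text/html': { extensions: ['html', 'htm'], charset: 'UTF-8' }
};

// build the "content-types by extension" map (cf. mime.types)
const types = {};
for (const [type, entry] of Object.entries(db)) {
  for (const ext of entry.extensions) types[ext] = type;
}

// default extension for a content-type (cf. mime.extension)
function extension(type) {
  const entry = db[type.toLowerCase()];
  return entry ? entry.extensions[0] : false; // false when not found
}

console.log(types['html']);                 // 'text/html'
console.log(extension('application/json')); // 'json'
console.log(extension('image/png'));        // false (not in our mini db)
```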
An Object.assign shim. Invoke its “shim” method to shim Object.assign if it is unavailable.
This package implements the es-shim API interface. It works in an ES3-supported environment and complies with the spec. In an ES6 environment, it will also work properly with Symbols.
Takes a minimum of 2 arguments: target and source. Takes a variable sized list of source arguments - at least 1, as many as you want. Throws a TypeError if the target argument is null or undefined.
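That contract can be sketched with a minimal, non-spec-compliant implementation (illustrative only; the package itself follows the es-shim API and the full spec, including Symbol support):

```javascript
// minimal sketch of Object.assign's contract: copy own enumerable
// properties from each source onto the target, left to right
function assignSketch(target /*, ...sources */) {
  if (target == null) {
    throw new TypeError('Cannot convert undefined or null to object');
  }
  var to = Object(target);
  for (var i = 1; i < arguments.length; i++) {
    var source = arguments[i];
    if (source == null) continue; // null/undefined sources are ignored
    for (var key in source) {
      if (Object.prototype.hasOwnProperty.call(source, key)) {
        to[key] = source[key];
      }
    }
  }
  return to;
}

assignSketch({ a: 1 }, { b: 2 }, { b: 3 }); // { a: 1, b: 3 }
```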
Most common usage:
var assign = require('object.assign').getPolyfill(); // returns native method if compliant
/* or */
var assign = require('object.assign/polyfill')(); // returns native method if compliant
var assert = require('assert');
// Multiple sources!
var target = { a: true };
var source1 = { b: true };
var source2 = { c: true };
var sourceN = { n: true };
var expected = {
a: true,
b: true,
c: true,
n: true
};
assign(target, source1, source2, sourceN);
assert.deepEqual(target, expected); // AWESOME!
var target = {
a: true,
b: true,
c: true
};
var source1 = {
c: false,
d: false
};
var sourceN = {
e: false
};
var assigned = assign(target, source1, sourceN);
assert.equal(target, assigned); // returns the target object
assert.deepEqual(assigned, {
a: true,
b: true,
c: false,
d: false,
e: false
});
/* when Object.assign is not present */
delete Object.assign;
var shimmedAssign = require('object.assign').shim();
/* or */
var shimmedAssign = require('object.assign/shim')();
assert.equal(shimmedAssign, assign);
var target = {
a: true,
b: true,
c: true
};
var source = {
c: false,
d: false,
e: false
};
var assigned = assign(target, source);
assert.deepEqual(Object.assign(target, source), assign(target, source));
/* when Object.assign is present */
var shimmedAssign = require('object.assign').shim();
assert.equal(shimmedAssign, Object.assign);
var target = {
a: true,
b: true,
c: true
};
var source = {
c: false,
d: false,
e: false
};
assert.deepEqual(Object.assign(target, source), assign(target, source));
Simply clone the repo, npm install, and run npm test.
Install with npm:
var union = require('union-value');
var obj = {};
union(obj, 'a.b.c', ['one', 'two']);
union(obj, 'a.b.c', ['three']);
console.log(obj);
//=> {a: {b: {c: [ 'one', 'two', 'three' ] }}}
Related projects: get-value (use property paths like a.b.c to get a nested value from an object), set-value (create nested values using 'a.b.c' paths).
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.4.2, on February 25, 2017. # safe-regex
Detect potentially catastrophic exponential-time regular expressions by limiting the star height to 1.
WARNING: This module has both false positives and false negatives. Use vuln-regex-detector for improved accuracy.
Suppose you have a script named safe.js:
var safe = require('safe-regex');
var regex = process.argv.slice(2).join(' ');
console.log(safe(regex));
This is its behavior:
$ node safe.js '(x+x+)+y'
false
$ node safe.js '(beep|boop)*'
true
$ node safe.js '(a+){10}'
false
$ node safe.js '\blocation\s*:[^:\n]+\b(Oakland|San Francisco)\b'
true
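The star-height idea can be illustrated with a naive sketch: flag a pattern when a quantifier appears inside a group that is itself quantified. This is a hypothetical heuristic for illustration, not this module's actual parser, and it misses cases the real module catches (such as bounded repetition like (a+){10}):

```javascript
// naive illustration of "star height > 1": a quantified group that
// itself contains a quantifier, e.g. (x+x+)+
function looksUnsafe(source) {
  return /\([^)]*[+*][^)]*\)[+*]/.test(source);
}

console.log(looksUnsafe('(x+x+)+y'));     // true  (nested quantifiers)
console.log(looksUnsafe('(beep|boop)*')); // false (star height 1)
```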
Returns a boolean ok indicating whether or not the regex re is safe and not possibly catastrophic.
re can be a RegExp object or just a string.
If the re is a string and is an invalid regex, returns false.
opts.limit - maximum number of allowed repetitions in the entire regex. Default: 25.
With npm do:
npm install safe-regex
The following documents may be edifying:
This project follows Semantic Versioning 2.0 (semver).
Here are the project-specific meanings of MAJOR, MINOR, and PATCH updates:
Define a non-enumerable property on an object.
Install with npm:
Install with yarn:
Params
- obj: The object on which to define the property.
- prop: The name of the property to be defined or modified.
- descriptor: The descriptor for the property being defined or modified.
var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
return val.toUpperCase();
});
console.log(obj);
//=> {}
console.log(obj.foo('bar'));
//=> 'BAR'
get/set
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.5.0, on April 20, 2017. # assert
This module is used for writing unit tests for your applications; you can access it with require('assert').
It aims to be fully compatible with the Node.js assert module: same API and same behavior, just adding support for web browsers. The API and code may contain traces of the CommonJS Unit Testing 1.0 spec which they were based on, but both have evolved significantly since then.
A strict and a legacy mode exist, while it is recommended to only use strict mode.
When using strict mode, every assert function uses strict equality semantics. So assert.deepEqual() will, for example, work the same as assert.deepStrictEqual().
It can be accessed using:
Deprecated: Use strict mode instead.
When accessing assert directly instead of using the strict property, the Abstract Equality Comparison will be used for any function without a “strict” in its name (e.g. assert.deepEqual()).
It can be accessed using:
It is recommended to use strict mode instead, as the Abstract Equality Comparison can often have surprising results, especially in the case of assert.deepEqual(), where the comparison rules are very lax.
E.g.
Throws an exception that displays the values for actual and expected separated by the provided operator.
Tests if value is truthy; it is equivalent to assert.equal(true, !!value, message).
Tests shallow, coercive equality with the equal comparison operator ( == ).
Tests shallow, coercive non-equality with the not equal comparison operator ( != ).
Tests for deep equality.
Tests for deep equality, as determined by the strict equality operator ( === )
Tests for any deep inequality.
Tests strict equality, as determined by the strict equality operator ( === )
Tests strict non-equality, as determined by the strict not equal operator ( !== )
Expects block to throw an error. error can be constructor, regexp or validation function.
Validate instanceof using constructor:
Validate error message using RegExp:
Custom error validation:
assert.throws(function() {
throw new Error("Wrong value");
}, function(err) {
if ( (err instanceof Error) && /value/.test(err) ) {
return true;
}
}, "unexpected error");Expects block not to throw an error, see assert.throws for details.
Tests if value is not a false value, throws if it is a true value. Useful when testing the first argument, error in callbacks.
Merge multiple streams into one stream in sequence or parallel.
Install with npm
const gulp = require('gulp')
const merge2 = require('merge2')
const concat = require('gulp-concat')
const minifyHtml = require('gulp-minify-html')
const ngtemplate = require('gulp-ngtemplate')
gulp.task('app-js', function () {
return merge2(
gulp.src('static/src/tpl/*.html')
.pipe(minifyHtml({empty: true}))
.pipe(ngtemplate({
module: 'genTemplates',
standalone: true
})
), gulp.src([
'static/src/js/app.js',
'static/src/js/locale_zh-cn.js',
'static/src/js/router.js',
'static/src/js/tools.js',
'static/src/js/services.js',
'static/src/js/filters.js',
'static/src/js/directives.js',
'static/src/js/controllers.js'
])
)
.pipe(concat('app.js'))
.pipe(gulp.dest('static/dist/js/'))
})
const stream = merge2([stream1, stream2], stream3, {end: false})
//...
stream.add(stream4, stream5)
//..
stream.end()
// equal to merge2([stream1, stream2], stream3)
const stream = merge2()
stream.add([stream1, stream2])
stream.add(stream3)
// merge order:
// 1. merge `stream1`;
// 2. merge `stream2` and `stream3` in parallel after `stream1` merged;
// 3. merge 'stream4' after `stream2` and `stream3` merged;
const stream = merge2(stream1, [stream2, stream3], stream4)
// merge order:
// 1. merge `stream5` and `stream6` in parallel after `stream4` merged;
// 2. merge 'stream7' after `stream5` and `stream6` merged;
stream.add([stream5, stream6], stream7)
// nest merge
// equal to merge2(stream1, stream2, stream6, stream3, [stream4, stream5]);
const streamA = merge2(stream1, stream2)
const streamB = merge2(stream3, [stream4, stream5])
const stream = merge2(streamA, streamB)
streamA.add(stream6)
Returns a duplex stream (mergedStream). Streams in an array will be merged in parallel.
return the mergedStream.
It will emit 'queueDrain' when all streams have been merged. If you set end === false in the options, this event notifies you that you should either add more streams to merge or end the mergedStream.
option Type: Readable or Duplex or Transform stream.
option Type: Object.
end - Boolean - if end === false then mergedStream will not be auto ended, you should end by yourself. Default: undefined
pipeError - Boolean - if pipeError === true then mergedStream will emit error event from source streams. Default: undefined
objectMode - Boolean . Default: true
objectMode and the other options (highWaterMark, defaultEncoding, …) are the same as for Node.js streams.
Deeply mix the properties of objects into the first object. Like merge-deep, but doesn’t clone.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
var mixinDeep = require('mixin-deep');
mixinDeep({a: {aa: 'aa'}}, {a: {bb: 'bb'}}, {a: {cc: 'cc'}});
//=> { a: { aa: 'aa', bb: 'bb', cc: 'cc' } }
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
extend but recursively copies only the missing properties/values to the target object. | homepageJon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on December 09, 2017. # arr-flatten
Recursively flatten an array or arrays.
Install with npm:
Install with bower
var flatten = require('arr-flatten');
flatten(['a', ['b', ['c']], 'd', ['e']]);
//=> ['a', 'b', 'c', 'd', 'e']
I wanted the fastest implementation I could find, with implementation choices that should work for 95% of use cases, but no cruft to cover the other 5%.
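The core approach can be sketched as a simple recursive walk (illustrative only, not the package's tuned implementation):

```javascript
// recursively flatten nested arrays into a single flat array
function flattenSketch(arr) {
  var out = [];
  for (var i = 0; i < arr.length; i++) {
    var el = arr[i];
    if (Array.isArray(el)) {
      out.push.apply(out, flattenSketch(el)); // recurse into nested arrays
    } else {
      out.push(el);
    }
  }
  return out;
}

flattenSketch(['a', ['b', ['c']], 'd', ['e']]); // ['a', 'b', 'c', 'd', 'e']
```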
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 20 | jonschlinkert |
| 1 | lukeed |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 05, 2017. # is-relative
Returns true if the path appears to be relative.
Install with npm:
var isRelative = require('is-relative');
console.log(isRelative('README.md'));
//=> true
console.log(isRelative('/User/dev/foo/README.md'));
//=> false
Related projects: is-absolute (like path.isAbsolute; returns true if a file path is absolute), is-glob (returns true if the given string looks like a glob pattern or an extglob pattern), is-relative (returns true if the path appears to be relative).
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 13 | jonschlinkert |
| 3 | shinnn |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
In a nutshell:
var parse = require('spdx-expression-parse')
var assert = require('assert')
assert.deepEqual(
  parse('BSD-2-Clause'),
  { license: 'BSD-2-Clause' }
)
assert.throws(function () {
  // Should be `Apache-2.0`.
  parse('Apache 2')
})
assert.deepEqual(
  // Dual licensed under either:
  // - LGPL 2.1
  // - a combination of EPL 1.0 and Apache 2.0
  parse('(LGPL-2.1 OR (EPL-1.0 AND Apache-2.0))'),
  {
    left: { license: 'LGPL-2.1' },
    conjunction: 'or',
    right: {
      left: { license: 'EPL-1.0' },
      conjunction: 'and',
      right: { license: 'Apache-2.0' }
    }
  }
)
The bulk of the SPDX standard describes syntax and semantics of XML metadata files. This package implements two lightweight, plain-text components of that larger standard:
Encode a URL to a percent-encoded form, excluding already-encoded sequences
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Encode a URL to a percent-encoded form, excluding already-encoded sequences.
This function will take an already-encoded URL and encode all the non-URL code points (as UTF-8 byte sequences). This function will not encode the “%” character unless it is not part of a valid sequence (%20 will be left as-is, but %foo will be encoded as %25foo).
This encode is meant to be “safe” and does not throw errors. It will try as hard as it can to properly encode the given URL, including replacing any raw, unpaired surrogate pairs with the Unicode replacement character prior to encoding.
This function is similar to the intrinsic function encodeURI, except it will not encode the % character if that is part of a valid sequence, will not encode [ and ] (for IPv6 hostnames) and will replace raw, unpaired surrogate pairs with the Unicode replacement character (instead of throwing).
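The '%' rule can be sketched like this (a simplified illustration; the real module also encodes the other non-URL code points and repairs unpaired surrogates):

```javascript
// encode a raw '%' only when it does not begin a valid %XX sequence
function encodePercentSketch(url) {
  return url.replace(/%(?![0-9A-Fa-f]{2})/g, '%25');
}

console.log(encodePercentSketch('/a%20b')); // '/a%20b'   (left as-is)
console.log(encodePercentSketch('/a%foo')); // '/a%25foo' (invalid sequence)
```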
var encodeUrl = require('encodeurl')
var escapeHtml = require('escape-html')
http.createServer(function onRequest (req, res) {
// get encoded form of inbound url
var url = encodeUrl(req.url)
// create html message
var body = '<p>Location ' + escapeHtml(url) + ' not found</p>'
// send a 404
res.statusCode = 404
res.setHeader('Content-Type', 'text/html; charset=UTF-8')
res.setHeader('Content-Length', String(Buffer.byteLength(body, 'utf-8')))
res.end(body, 'utf-8')
})
var encodeUrl = require('encodeurl')
var escapeHtml = require('escape-html')
var url = require('url')
http.createServer(function onRequest (req, res) {
// parse inbound url
var href = url.parse(req)
// set new host for redirect
href.host = 'localhost'
href.protocol = 'https:'
href.slashes = true
// create location header
var location = encodeUrl(url.format(href))
// create html message
var body = '<p>Redirecting to new site: ' + escapeHtml(location) + '</p>'
// send a 301
res.statusCode = 301
res.setHeader('Content-Type', 'text/html; charset=UTF-8')
res.setHeader('Content-Length', String(Buffer.byteLength(body, 'utf-8')))
res.setHeader('Location', location)
res.end(body, 'utf-8')
})
Returns true if the value is an object and not an array or null.
Install with npm:
Install with yarn:
Use is-plain-object if you want only objects that are created by the Object constructor.
Install with npm:
Install with bower
True
All of the following return true:
isObject({});
isObject(Object.create({}));
isObject(Object.create(Object.prototype));
isObject(Object.create(null));
isObject({});
isObject(new Foo);
isObject(/foo/);
False
All of the following return false:
isObject();
isObject(function () {});
isObject(1);
isObject([]);
isObject(undefined);
isObject(null);
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 29 | jonschlinkert |
| 4 | doowb |
| 1 | magnudae |
| 1 | LeSuisse |
| 1 | tmcw |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on June 30, 2017.
This package consolidates all data structures of @datastructures-js into a single repository. The data structures are also distributed in their own repositories for easier maintenance and usability, so that they can be installed and imported individually in your code.
// import your required classes
const {
Queue,
Stack,
Set: EnhancedSet, // renamed to avoid conflict with es6 Set
LinkedList,
DoublyLinkedList,
MinHeap,
MaxHeap,
MinPriorityQueue,
MaxPriorityQueue,
Graph,
DirectedGraph,
BinarySearchTree,
AvlTree,
Trie
} = require('datastructures-js');
// import your required classes
import {
Queue,
PriorityQueue,
Stack,
Set as EnhancedSet, // renamed to avoid conflict with es6 Set
LinkedList,
DoublyLinkedList,
MinHeap,
MaxHeap,
MinPriorityQueue,
MaxPriorityQueue,
Graph,
DirectedGraph,
BinarySearchTree,
AvlTree,
Trie
} from 'datastructures-js';
There are sometimes domain-specific use cases for data structures that require either a tweak or additional functionality. Data structures here are implemented as base, general-purpose ES6 classes. You can always use any of these classes to override or extend the functionality in your own code.
const { Graph } = require('datastructures-js'); // OR require('@datastructures-js/graph')
class BusStationsGraph extends Graph {
findShortestPath(srcStationId, destStationId) {
// benefit from Graph to implement your own code
}
}
https://github.com/datastructures-js/queue
https://github.com/datastructures-js/stack
https://github.com/datastructures-js/set
https://github.com/datastructures-js/linked-list
https://github.com/datastructures-js/linked-list
https://github.com/datastructures-js/heap
https://github.com/datastructures-js/heap
https://github.com/datastructures-js/priority-queue
https://github.com/datastructures-js/priority-queue
https://github.com/datastructures-js/graph
https://github.com/datastructures-js/graph
https://github.com/datastructures-js/binary-search-tree
https://github.com/datastructures-js/binary-search-tree
https://github.com/datastructures-js/trie
grunt build
Node.js function to invoke as the final step to respond to an HTTP request.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Returns function to be invoked as the final step for the given req and res. This function is to be invoked as fn(err). If err is falsy, the handler will write out a 404 response to the res. If it is truthy, an error response will be written out to the res.
When an error is written, the following information is added to the response:
- res.statusCode is set from err.status (or err.statusCode). If this value is outside the 4xx or 5xx range, it will be set to 500.
- res.statusMessage is set according to the status code.
- The body will be the HTML of the status message if env is 'production', otherwise will be err.stack.
- Any headers specified in an err.headers object are set.
The final handler will also unpipe anything from req when it is invoked.
By default, the environment is determined by NODE_ENV variable, but it can be overridden by this option.
Provide a function to be called with the err when it exists. Can be used for writing errors to a central location without excessive function generation. Called as onerror(err, req, res).
var finalhandler = require('finalhandler')
var http = require('http')
var server = http.createServer(function (req, res) {
var done = finalhandler(req, res)
done()
})
server.listen(3000)var finalhandler = require('finalhandler')
var fs = require('fs')
var http = require('http')
var server = http.createServer(function (req, res) {
var done = finalhandler(req, res)
fs.readFile('index.html', function (err, buf) {
if (err) return done(err)
res.setHeader('Content-Type', 'text/html')
res.end(buf)
})
})
server.listen(3000)var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')
var serve = serveStatic('public')
var server = http.createServer(function (req, res) {
var done = finalhandler(req, res)
serve(req, res, done)
})
server.listen(3000)var finalhandler = require('finalhandler')
var fs = require('fs')
var http = require('http')
var server = http.createServer(function (req, res) {
var done = finalhandler(req, res, { onerror: logerror })
fs.readFile('index.html', function (err, buf) {
if (err) return done(err)
res.setHeader('Content-Type', 'text/html')
res.end(buf)
})
})
server.listen(3000)
function logerror (err) {
console.error(err.stack || err.toString())
}
Remove duplicate values from an array. Fastest ES5 implementation.
Install with npm:
var unique = require('array-unique');
var arr = ['a', 'b', 'c', 'c'];
console.log(unique(arr)) //=> ['a', 'b', 'c']
console.log(arr) //=> ['a', 'b', 'c']
/* The above modifies the input array. To prevent that at a slight performance cost: */
var unique = require("array-unique").immutable;
var arr = ['a', 'b', 'c', 'c'];
console.log(unique(arr)) //=> ['a', 'b', 'c']
console.log(arr) //=> ['a', 'b', 'c', 'c']
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.1.28, on July 31, 2016. # anymatch
JavaScript module to match a string against a regular expression, glob, string, or function that takes the string as an argument and returns a truthy or falsy value. The matcher can also be an array of any or all of these. Useful for allowing a very flexible user-defined config to define things like file paths.
Note: This module has Bash-parity, please be aware that Windows-style backslashes are not supported as separators. See https://github.com/micromatch/micromatch#backslashes for more information.
If an array is passed as the test string, only its first element is used as the testString for non-function matchers, while the entire array will be applied as the arguments for function matchers.
const anymatch = require('anymatch');
const matchers = [ 'path/to/file.js', 'path/anyjs/**/*.js', /foo.js$/, string => string.includes('bar') && string.length > 10 ] ;
anymatch(matchers, 'path/to/file.js'); // true
anymatch(matchers, 'path/anyjs/baz.js'); // true
anymatch(matchers, 'path/to/foo.js'); // true
anymatch(matchers, 'path/to/bar.js'); // true
anymatch(matchers, 'bar.js'); // false
// returnIndex = true
anymatch(matchers, 'foo.js', {returnIndex: true}); // 2
anymatch(matchers, 'path/anyjs/foo.js', {returnIndex: true}); // 1
// any picomatch option can also be passed
// using globs to match directories and their children
anymatch('node_modules', 'node_modules'); // true
anymatch('node_modules', 'node_modules/somelib/index.js'); // false
anymatch('node_modules/**', 'node_modules/somelib/index.js'); // true
anymatch('node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // false
anymatch('**/node_modules/**', '/absolute/path/to/node_modules/somelib/index.js'); // true
const matcher = anymatch(matchers);
['foo.js', 'bar.js'].filter(matcher); // [ 'foo.js' ]
You can also pass in only your matcher(s) to get a curried function that has already been bound to the provided matching criteria. This can be used as an Array#filter callback.
var matcher = anymatch(matchers);
matcher('path/to/file.js'); // true
matcher('path/anyjs/baz.js', true); // 1
['foo.js', 'bar.js'].filter(matcher); // ['foo.js']
See the release notes page on GitHub.
Returns true if a filepath is a windows UNC file path.
Install with npm:
true
Returns true for windows UNC paths:
isUncPath('\\/foo/bar');
isUncPath('\\\\foo/bar');
isUncPath('\\\\foo\\admin$');
isUncPath('\\\\foo\\admin$\\system32');
isUncPath('\\\\foo\\temp');
isUncPath('\\\\/foo/bar');
isUncPath('\\\\\\/foo/bar');false
Returns false for non-UNC paths:
isUncPath('/foo/bar');
isUncPath('/');
isUncPath('/foo');
isUncPath('/foo/');
isUncPath('c:');
isUncPath('c:.');
isUncPath('c:./');
isUncPath('c:./file');
isUncPath('c:/');
isUncPath('c:/file');
Customization
Use .source to use the regex as a component of another regex:
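For example (the pattern below is an illustrative approximation, not this module's actual regex):

```javascript
// illustrative UNC matcher: two or more leading slashes/backslashes,
// a host segment, a separator, and a share segment
const unc = /^[\\\/]{2,}[^\\\/]+[\\\/]+[^\\\/]+/;

// reuse its .source as a component of a larger regex
const uncWithTail = new RegExp(unc.source + '.*$');

console.log(unc.test('\\\\foo\\bar'));              // true
console.log(unc.test('/foo/bar'));                  // false
console.log(uncWithTail.test('\\\\foo\\bar\\baz')); // true
```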
Changes
Throws a TypeError if value is not a string.
Related projects: is-absolute (like path.isAbsolute; returns true if a file path is absolute), is-glob (returns true if the given string looks like a glob pattern or an extglob pattern), is-relative (returns true if the path appears to be relative).
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 13, 2017. # has-values
Returns true if any values exist, false if empty. Works for booleans, functions, numbers, strings, nulls, objects and arrays.
Install with npm:
var hasValue = require('has-values');
hasValue('a');
//=> true
hasValue('');
//=> false
hasValue(1);
//=> true
hasValue(0);
//=> false
hasValue({a: 'a'});
//=> true
hasValue({});
hasValue({foo: undefined});
//=> false
hasValue({foo: null});
//=> true
hasValue(['a']);
//=> true
hasValue([]);
hasValue([[], []]);
hasValue([[[]]]);
//=> false
hasValue(['foo']);
hasValue([0]);
//=> true
hasValue(function(foo) {});
//=> true
hasValue(function() {});
//=> true
hasValue(true);
//=> true
hasValue(false);
//=> true
To test for empty values, do:
var isEmpty = !hasValue(value);
Changes: zero always returns true; array checking now recurses, so that an array of empty arrays will return false; null now returns true.
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on May 19, 2017. # brace-expansion
var expand = require('brace-expansion');
expand('file-{a,b,c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('-v{,,}')
// => ['-v', '-v', '-v']
expand('file{0..2}.jpg')
// => ['file0.jpg', 'file1.jpg', 'file2.jpg']
expand('file-{a..c}.jpg')
// => ['file-a.jpg', 'file-b.jpg', 'file-c.jpg']
expand('file{2..0}.jpg')
// => ['file2.jpg', 'file1.jpg', 'file0.jpg']
expand('file{0..4..2}.jpg')
// => ['file0.jpg', 'file2.jpg', 'file4.jpg']
expand('file-{a..e..2}.jpg')
// => ['file-a.jpg', 'file-c.jpg', 'file-e.jpg']
expand('file{00..10..5}.jpg')
// => ['file00.jpg', 'file05.jpg', 'file10.jpg']
expand('{{A..C},{a..c}}')
// => ['A', 'B', 'C', 'a', 'b', 'c']
expand('ppp{,config,oe{,conf}}')
// => ['ppp', 'pppconfig', 'pppoe', 'pppoeconf']Return an array of all possible and valid expansions of str. If none are found, [str] is returned.
Valid expansions are:
A comma separated list of options, like {a,b} or {a,{b,c}} or {,a,}.
A numeric sequence from x to y inclusive, with optional increment. If x or y start with a leading 0, all the numbers will be padded to have equal length. Negative numbers and backwards iteration work too.
An alphabetic sequence from x to y inclusive, with optional increment. x and y must be exactly one character, and if given, incr must be a number.
For compatibility reasons, the string ${ is not eligible for brace expansion.
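The comma-list rule above can be illustrated with a toy recursive expander. This handles only the flat {a,b,c} case (no sequences, no escaping, no nesting guarantees) and is not a substitute for the module itself:

```javascript
// Toy expander for the simple {a,b,c} comma-list case only.
// For illustration of the rule; use brace-expansion for real work.
function expandCommas(str) {
  // find the first brace-free {...} group
  var m = /^(.*?)\{([^{}]*)\}(.*)$/.exec(str);
  if (!m || m[2].indexOf(',') === -1) return [str];
  var out = [];
  m[2].split(',').forEach(function (opt) {
    // substitute each option and keep expanding the remainder
    expandCommas(m[1] + opt + m[3]).forEach(function (s) { out.push(s); });
  });
  return out;
}
```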
With npm do:
This module is proudly supported by my Sponsors!
Do you want to support modules like this to improve their quality, stability and weigh in on new features? Then please consider donating to my Patreon. Not sure how much of my modules you’re using? Try feross/thanks!
This is a database of all mime types. It consists of a single, public JSON file and does not include any logic, allowing it to remain as un-opinionated as possible with an API. It aggregates data from the following sources:
If you’re crazy enough to use this in the browser, you can just grab the JSON file using jsDelivr. It is recommended to replace master with a release tag as the JSON format may change in the future.
https://cdn.jsdelivr.net/gh/jshttp/mime-db@master/db.json
The JSON file is a map lookup for lowercased mime types. Each mime type has the following properties:
.source - where the mime type is defined. If not set, it’s probably a custom media type.
- apache - Apache common media types
- iana - IANA-defined media types
- nginx - nginx media types

.extensions[] - known extensions associated with this mime type.
.compressible - whether a file of this type can be gzipped.
.charset - the default charset associated with this type, if any.

If unknown, every property could be undefined.
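For illustration, an entry in db.json has roughly this shape. The values below are examples of the documented properties, not copied from the real database:

```javascript
// Example shape of a single db.json entry, keyed by the lowercased
// MIME type. Values here are illustrative.
var db = {
  'application/json': {
    source: 'iana',        // where the type is defined
    charset: 'UTF-8',      // default charset, if any
    compressible: true,    // can be gzipped
    extensions: ['json']   // known file extensions
  }
};

var entry = db['application/json'];
```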
To edit the database, only make PRs against src/custom.json or src/custom-suffix.json.
The src/custom.json file is a JSON object with the MIME type as the keys and the values being an object with the following keys:
- compressible - leave out if you don’t know, otherwise true/false to indicate whether the data represented by the type is typically compressible.
- extensions - include an array of file extensions that are associated with the type.
- notes - human-readable notes about the type, typically what the type is.
- sources - include an array of URLs of where the MIME type and the associated extensions are sourced from. This needs to be a primary source; links to type aggregating sites and Wikipedia are not acceptable.

To update the build, run npm run build.
The best way to get new media types included in this library is to register them with the IANA. The community registration procedure is outlined in RFC 6838 section 5. Types registered with the IANA are automatically pulled into this library.
If that is not possible / feasible, they can be added directly here as a “custom” type. To do this, it is required to have a primary source that definitively lists the media type. If an extension is going to be listed as associated with this media type, the source must definitively link the media type and extension as well.
Higher level content negotiation based on negotiator. Extracted from koa for general use.
In addition to negotiator, it allows:
- Types as an array or arguments list, i.e. (['text/html', 'application/json']) as well as ('text/html', 'application/json').
- Type shorthands such as json.
- Returns false when no types match.
- Treats non-existent headers as *.

This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Create a new Accepts object for the given req.
Return the first accepted charset. If nothing in charsets is accepted, then false is returned.
Return the charsets that the request accepts, in the order of the client’s preference (most preferred first).
Return the first accepted encoding. If nothing in encodings is accepted, then false is returned.
Return the encodings that the request accepts, in the order of the client’s preference (most preferred first).
Return the first accepted language. If nothing in languages is accepted, then false is returned.
Return the languages that the request accepts, in the order of the client’s preference (most preferred first).
Return the first accepted type (and it is returned as the same text as what appears in the types array). If nothing in types is accepted, then false is returned.
The types array can contain full MIME types or file extensions. Any value that is not a full MIME type is passed to require('mime-types').lookup.
Return the types that the request accepts, in the order of the client’s preference (most preferred first).
This simple example shows how to use accepts to return a differently typed response body based on what the client wants to accept. The server lists its preferences in order and will get back the best match between the client and server.
var accepts = require('accepts')
var http = require('http')
function app (req, res) {
var accept = accepts(req)
// the order of this list is significant; should be server preferred order
switch (accept.type(['json', 'html'])) {
case 'json':
res.setHeader('Content-Type', 'application/json')
res.write('{"hello":"world!"}')
break
case 'html':
res.setHeader('Content-Type', 'text/html')
res.write('<b>hello, world!</b>')
break
default:
// the fallback is text/plain, so no need to specify it above
res.setHeader('Content-Type', 'text/plain')
res.write('hello, world!')
break
}
res.end()
}
http.createServer(app).listen(3000)

You can test this out with the cURL program:
Parse a URL with memoization.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Parse the URL of the given request object (looks at the req.url property) and return the result. The result is the same as url.parse in Node.js core. Calling this function multiple times on the same req where req.url does not change will return a cached parsed object, rather than parsing again.
Parse the original URL of the given request object and return the result. This works by trying to parse req.originalUrl if it is a string, otherwise parses req.url. The result is the same as url.parse in Node.js core. Calling this function multiple times on the same req where req.originalUrl does not change will return a cached parsed object, rather than parsing again.
$ npm run-script bench
> parseurl@1.3.3 bench nodejs-parseurl
> node benchmark/index.js
http_parser@2.8.0
node@10.6.0
v8@6.7.288.46-node.13
uv@1.21.0
zlib@1.2.11
ares@1.14.0
modules@64
nghttp2@1.32.0
napi@3
openssl@1.1.0h
icu@61.1
unicode@10.0
cldr@33.0
tz@2018c
> node benchmark/fullurl.js
Parsing URL "http://localhost:8888/foo/bar?user=tj&pet=fluffy"
4 tests completed.
fasturl x 2,207,842 ops/sec ±3.76% (184 runs sampled)
nativeurl - legacy x 507,180 ops/sec ±0.82% (191 runs sampled)
nativeurl - whatwg x 290,044 ops/sec ±1.96% (189 runs sampled)
parseurl x 488,907 ops/sec ±2.13% (192 runs sampled)
> node benchmark/pathquery.js
Parsing URL "/foo/bar?user=tj&pet=fluffy"
4 tests completed.
fasturl x 3,812,564 ops/sec ±3.15% (188 runs sampled)
nativeurl - legacy x 2,651,631 ops/sec ±1.68% (189 runs sampled)
nativeurl - whatwg x 161,837 ops/sec ±2.26% (189 runs sampled)
parseurl x 4,166,338 ops/sec ±2.23% (184 runs sampled)
> node benchmark/samerequest.js
Parsing URL "/foo/bar?user=tj&pet=fluffy" on same request object
4 tests completed.
fasturl x 3,821,651 ops/sec ±2.42% (185 runs sampled)
nativeurl - legacy x 2,651,162 ops/sec ±1.90% (187 runs sampled)
nativeurl - whatwg x 175,166 ops/sec ±1.44% (188 runs sampled)
parseurl x 14,912,606 ops/sec ±3.59% (183 runs sampled)
> node benchmark/simplepath.js
Parsing URL "/foo/bar"
4 tests completed.
fasturl x 12,421,765 ops/sec ±2.04% (191 runs sampled)
nativeurl - legacy x 7,546,036 ops/sec ±1.41% (188 runs sampled)
nativeurl - whatwg x 198,843 ops/sec ±1.83% (189 runs sampled)
parseurl x 24,244,006 ops/sec ±0.51% (194 runs sampled)
> node benchmark/slash.js
Parsing URL "/"
4 tests completed.
fasturl x 17,159,456 ops/sec ±3.25% (188 runs sampled)
nativeurl - legacy x 11,635,097 ops/sec ±3.79% (184 runs sampled)
nativeurl - whatwg x 240,693 ops/sec ±0.83% (189 runs sampled)
parseurl x 42,279,067 ops/sec ±0.55% (190 runs sampled)

# is-extendable

Returns true if a value is a plain object, array or function.
Install with npm:
Returns true if the value is any of the following:
All objects in JavaScript can have keys, but it’s a pain to check for this, since we either need to verify that the value is not null or undefined and:
Also note that an extendable object is not the same as an extensible object, which is one that (in es6) is not sealed, frozen, or marked as non-extensible using preventExtensions.
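The check described above can be sketched as follows. This is a rough stand-in for illustration; the real module additionally distinguishes plain objects from other object types:

```javascript
// Sketch of an "is extendable" check: values that can safely be
// extended with new properties (plain-ish objects, arrays, functions).
// Not the real is-extendable implementation.
function isExtendableSketch(val) {
  return typeof val === 'function'
    || Array.isArray(val)
    || (val !== null && typeof val === 'object');
}
```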
Breaking changes
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 20, 2017.

# node-error-ex
> Easily subclass and customize new Error types
To include in your project:
To create an error message type with a specific name (note, that ErrorFn.name will not reflect this):
var JSONError = errorEx('JSONError');
var err = new JSONError('error');
err.name; //-> JSONError
throw err; //-> JSONError: error

To add a stack line:
var JSONError = errorEx('JSONError', {fileName: errorEx.line('in %s')});
var err = new JSONError('error')
err.fileName = '/a/b/c/foo.json';
throw err; //-> (line 2)-> in /a/b/c/foo.json

To append to the error message:
var JSONError = errorEx('JSONError', {fileName: errorEx.append('in %s')});
var err = new JSONError('error');
err.fileName = '/a/b/c/foo.json';
throw err; //-> JSONError: error in /a/b/c/foo.json

errorEx([name], [properties])

Creates a new ErrorEx error type.
- name: the name of the new type (appears in the error message upon throw; defaults to Error.name)
- properties: if supplied, used as a key/value dictionary of properties to use when building up the stack message. Keys are property names that are looked up on the error message, and then passed to function values.
- line: if specified and is a function, the return value is added as a stack entry (error-ex will indent for you). Passed the property value given the key.
- stack: if specified and is a function, passed the value of the property using the key, and the raw stack lines as a second argument. Takes no return value (but the stack can be modified directly).
- message: if specified and is a function, the return value is used as the new .message value upon get. Passed the property value of the property named by key, and the existing message as the second argument (an array of lines, suitable for multi-line messages).

Returns a constructor (Function) that can be used just like the regular Error constructor.
var errorEx = require('error-ex');
var BasicError = errorEx();
var NamedError = errorEx('NamedError');
// --
var AdvancedError = errorEx('AdvancedError', {
foo: {
line: function (value, stack) {
if (value) {
return 'bar ' + value;
}
return null;
}
}
});
var err = new AdvancedError('hello, world');
err.foo = 'baz';
throw err;
/*
AdvancedError: hello, world
bar baz
at tryReadme() (readme.js:20:1)
*/

errorEx.line(str)

Creates a stack line using a delimiter.
This is a helper function. It is to be used in lieu of writing a value object for properties values.
str: The string to create
Use %s to specify where in the string the value should go.

var errorEx = require('error-ex');
var FileError = errorEx('FileError', {fileName: errorEx.line('in %s')});
var err = new FileError('problem reading file');
err.fileName = '/a/b/c/d/foo.js';
throw err;
/*
FileError: problem reading file
in /a/b/c/d/foo.js
at tryReadme() (readme.js:7:1)
*/

errorEx.append(str)

Appends to the error.message string.
This is a helper function. It is to be used in lieu of writing a value object for properties values.
str: The string to append
Use %s to specify where in the string the value should go.

var errorEx = require('error-ex');
var SyntaxError = errorEx('SyntaxError', {fileName: errorEx.append('in %s')});
var err = new SyntaxError('improper indentation');
err.fileName = '/a/b/c/d/foo.js';
throw err;
/*
SyntaxError: improper indentation in /a/b/c/d/foo.js
at tryReadme() (readme.js:7:1)
*/

Get details about the current Continuous Integration environment.
Please open an issue if your CI server isn’t properly detected :)
var ci = require('ci-info')
if (ci.isCI) {
console.log('The name of the CI server is:', ci.name)
} else {
console.log('This program is not running on a CI server')
}

Officially supported CI servers:

| Name | Constant | isPR |
|---|---|---|
| AWS CodeBuild | ci.CODEBUILD | 🚫 |
| AppVeyor | ci.APPVEYOR | ✅ |
| Azure Pipelines | ci.AZURE_PIPELINES | ✅ |
| Bamboo by Atlassian | ci.BAMBOO | 🚫 |
| Bitbucket Pipelines | ci.BITBUCKET | ✅ |
| Bitrise | ci.BITRISE | ✅ |
| Buddy | ci.BUDDY | ✅ |
| Buildkite | ci.BUILDKITE | ✅ |
| CircleCI | ci.CIRCLE | ✅ |
| Cirrus CI | ci.CIRRUS | ✅ |
| Codeship | ci.CODESHIP | 🚫 |
| Drone | ci.DRONE | ✅ |
| dsari | ci.DSARI | 🚫 |
| GitLab CI | ci.GITLAB | 🚫 |
| GoCD | ci.GOCD | 🚫 |
| Hudson | ci.HUDSON | 🚫 |
| Jenkins CI | ci.JENKINS | ✅ |
| Magnum CI | ci.MAGNUM | 🚫 |
| Netlify CI | ci.NETLIFY | ✅ |
| Sail CI | ci.SAIL | ✅ |
| Semaphore | ci.SEMAPHORE | ✅ |
| Shippable | ci.SHIPPABLE | ✅ |
| Solano CI | ci.SOLANO | ✅ |
| Strider CD | ci.STRIDER | 🚫 |
| TaskCluster | ci.TASKCLUSTER | 🚫 |
| TeamCity by JetBrains | ci.TEAMCITY | 🚫 |
| Travis CI | ci.TRAVIS | ✅ |
ci.name

Returns a string containing the name of the CI server the code is running on. If the CI server is not detected, it returns null.
Don’t depend on the value of this string not to change for a specific vendor. If you find yourself writing ci.name === 'Travis CI', you most likely want to use ci.TRAVIS instead.
ci.isCI

Returns a boolean. Will be true if the code is running on a CI server, otherwise false.
Some CI servers not listed here might still trigger the ci.isCI boolean to be set to true if they use certain vendor neutral environment variables. In those cases ci.name will be null and no vendor specific boolean will be set to true.
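The vendor-neutral detection described above amounts to checking a handful of well-known environment variables. A simplified sketch (the variable names here are common conventions, not the module's exact list):

```javascript
// Sketch of vendor-neutral CI detection: many CI systems set one of a
// few well-known environment variables. Not ci-info's actual list.
function isCISketch(env) {
  return Boolean(
    env.CI ||                     // set by Travis, CircleCI, GitLab and others
    env.CONTINUOUS_INTEGRATION || // set by some vendors
    env.BUILD_NUMBER              // set by Jenkins, TeamCity
  );
}
```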
ci.isPR

Returns a boolean if PR detection is supported for the current CI server: true if a PR is being tested, otherwise false. If PR detection is not supported for the current CI server, the value will be null.
ci.<VENDOR-CONSTANT>

A vendor-specific boolean constant is exposed for each supported CI vendor. A constant will be true if the code is determined to run on the given CI server, otherwise false.
Examples of vendor constants are ci.TRAVIS or ci.APPVEYOR. For a complete list, see the support table above.
Deprecated vendor constants that will be removed in the next major release:
ci.TDDIUM (Solano CI): this has been renamed ci.SOLANO.

The node core libs for in-browser usage.
NOTE: This library is deprecated and won’t accept Pull Requests that include Breaking Changes or new Features. Only bugfixes are accepted.
Exports a hash object of absolute paths to each lib, keyed by lib names. Modules without browser replacements are null.
Some modules have mocks in the mock directory. These are replacements with minimal functionality.
| lib name | browser implementation | mock implementation |
|---|---|---|
| assert | defunctzombie/commonjs-assert | — |
| buffer | feross/buffer | buffer.js |
| child_process | — | — |
| cluster | — | — |
| console | Raynos/console-browserify | console.js |
| constants | juliangruber/constants-browserify | — |
| crypto | crypto-browserify/crypto-browserify | — |
| dgram | — | — |
| dns | — | dns.js |
| domain | bevry/domain-browser | — |
| events | Gozala/events | — |
| fs | — | — |
| http | jhiesey/stream-http | — |
| https | substack/https-browserify | — |
| module | — | — |
| net | — | net.js |
| os | CoderPuppy/os-browserify | — |
| path | substack/path-browserify | — |
| process | shtylman/node-process | process.js |
| punycode | bestiejs/punycode.js | — |
| querystring | mike-spainhower/querystring | — |
| readline | — | — |
| repl | — | — |
| stream | substack/stream-browserify | — |
| string_decoder | rvagg/string_decoder | — |
| sys | defunctzombie/node-util | — |
| timers | jryans/timers-browserify | — |
| tls | — | tls.js |
| tty | substack/tty-browserify | tty.js |
| url | defunctzombie/node-url | — |
| util | defunctzombie/node-util | — |
| vm | substack/vm-browserify | — |
| zlib | devongovett/browserify-zlib | — |
buffer

The current buffer implementation uses feross/buffer@4.x because feross/buffer@5.x relies on typed arrays. This will be dropped as soon as IE9 is not a typical browser target anymore.
punycode

The current punycode implementation uses bestiejs/punycode.js@1.x because bestiejs/punycode.js@2.x requires modern JS engines that understand const and let. It will be removed someday since it has already been deprecated from the node API.

JavaScript · TypeScript · Flow · JSX · JSON
CSS · SCSS · Less
HTML · Vue · Angular
GraphQL · Markdown · YAML
Your favorite language?
Prettier is an opinionated code formatter. It enforces a consistent style by parsing your code and re-printing it with its own rules that take the maximum line length into account, wrapping code when necessary.
Prettier can be run in your editor on-save, in a pre-commit hook, or in CI environments to ensure your codebase has a consistent style without devs ever having to post a nit-picky comment on a code review ever again!
Show the world you’re using Prettier →
See CONTRIBUTING.md.
Create simple HTTP ETags
This module generates HTTP ETags (as defined in RFC 7232) for use in HTTP responses.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Generate a strong ETag for the given entity. This should be the complete body of the entity. Strings, Buffers, and fs.Stats are accepted. By default, a strong ETag is generated except for fs.Stats, which will generate a weak ETag (this can be overwritten by options.weak).
etag accepts these properties in the options object.
Specifies if the generated ETag will include the weak validator mark (that is, the leading W/). The actual entity tag is the same. The default value is false, unless the entity is fs.Stats, in which case it is true.
$ npm run-script bench
> etag@1.8.1 bench nodejs-etag
> node benchmark/index.js
http_parser@2.7.0
node@6.11.1
v8@5.1.281.103
uv@1.11.0
zlib@1.2.11
ares@1.10.1-DEV
icu@58.2
modules@48
openssl@1.0.2k
> node benchmark/body0-100b.js
100B body
4 tests completed.
buffer - strong x 258,647 ops/sec ±1.07% (180 runs sampled)
buffer - weak x 263,812 ops/sec ±0.61% (184 runs sampled)
string - strong x 259,955 ops/sec ±1.19% (185 runs sampled)
string - weak x 264,356 ops/sec ±1.09% (184 runs sampled)
> node benchmark/body1-1kb.js
1KB body
4 tests completed.
buffer - strong x 189,018 ops/sec ±1.12% (182 runs sampled)
buffer - weak x 190,586 ops/sec ±0.81% (186 runs sampled)
string - strong x 144,272 ops/sec ±0.96% (188 runs sampled)
string - weak x 145,380 ops/sec ±1.43% (187 runs sampled)
> node benchmark/body2-5kb.js
5KB body
4 tests completed.
buffer - strong x 92,435 ops/sec ±0.42% (188 runs sampled)
buffer - weak x 92,373 ops/sec ±0.58% (189 runs sampled)
string - strong x 48,850 ops/sec ±0.56% (186 runs sampled)
string - weak x 49,380 ops/sec ±0.56% (190 runs sampled)
> node benchmark/body3-10kb.js
10KB body
4 tests completed.
buffer - strong x 55,989 ops/sec ±0.93% (188 runs sampled)
buffer - weak x 56,148 ops/sec ±0.55% (190 runs sampled)
string - strong x 27,345 ops/sec ±0.43% (188 runs sampled)
string - weak x 27,496 ops/sec ±0.45% (190 runs sampled)
> node benchmark/body4-100kb.js
100KB body
4 tests completed.
buffer - strong x 7,083 ops/sec ±0.22% (190 runs sampled)
buffer - weak x 7,115 ops/sec ±0.26% (191 runs sampled)
string - strong x 3,068 ops/sec ±0.34% (190 runs sampled)
string - weak x 3,096 ops/sec ±0.35% (190 runs sampled)
> node benchmark/stats.js
stat
4 tests completed.
real - strong x 871,642 ops/sec ±0.34% (189 runs sampled)
real - weak x 867,613 ops/sec ±0.39% (190 runs sampled)
fake - strong x 401,051 ops/sec ±0.40% (189 runs sampled)
fake - weak x 400,100 ops/sec ±0.47% (188 runs sampled)

This will let you identify and transform various git host URLs between protocols. It can also tell you what the URL is for the raw path of a particular file, for direct access without git.
var hostedGitInfo = require("hosted-git-info")
var info = hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git", opts)
/* info looks like:
{
type: "github",
domain: "github.com",
user: "npm",
project: "hosted-git-info"
}
*/

If the URL can’t be matched with a git host, null will be returned. We can match git, ssh and https urls. Additionally, we can match ssh connect strings (git@github.com:npm/hosted-git-info) and shortcuts (eg, github:npm/hosted-git-info). GitHub, specifically, is detected in the case of a third, unprefixed form: npm/hosted-git-info.
If it does match, the returned object has properties of:
The major version will be bumped any time…
Implications:
- .https() to be a part of the contract. The contract is that it will return a string that can be used to fetch the repo via HTTPS. But what that string looks like, specifically, can change.
- git+ won’t be prefixed on URLs.

All of the methods take the same options as the fromUrl factory. Options provided to a method override those provided to the constructor.
Given the path of a file relative to the repository, returns a URL for directly fetching it from the githost. If no committish was set then master will be used as the default.
For example hostedGitInfo.fromUrl("git@github.com:npm/hosted-git-info.git#v1.0.0").file("package.json") would return https://raw.githubusercontent.com/npm/hosted-git-info/v1.0.0/package.json
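The example above boils down to string assembly. A sketch of the GitHub case specifically, with the URL template inferred from that example (other hosts use different templates):

```javascript
// Builds a raw-content URL for GitHub, mirroring the file() example
// above. 'master' is the fallback when no committish was set.
// Template assumed from the example; not the module's implementation.
function rawGithubUrl(user, project, committish, path) {
  return 'https://raw.githubusercontent.com/' + user + '/' + project +
    '/' + (committish || 'master') + '/' + path;
}
```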
eg, github:npm/hosted-git-info
eg, https://github.com/npm/hosted-git-info/tree/v1.2.0, https://github.com/npm/hosted-git-info/tree/v1.2.0/package.json, https://github.com/npm/hosted-git-info/tree/v1.2.0/README.md#supported-hosts
eg, https://github.com/npm/hosted-git-info/issues
eg, https://github.com/npm/hosted-git-info/tree/v1.2.0#readme
eg, git+https://github.com/npm/hosted-git-info.git
eg, git+ssh://git@github.com/npm/hosted-git-info.git
eg, git@github.com:npm/hosted-git-info.git
eg, npm/hosted-git-info
eg, https://github.com/npm/hosted-git-info/archive/v1.2.0.tar.gz
Returns the default output type. The default output type is based on the string you passed in to be parsed.
Uses the getDefaultRepresentation to call one of the other methods to get a URL for this resource. As such hostedGitInfo.fromUrl(url).toString() will give you a normalized version of the URL that still uses the same protocol.
Shortcuts will still be returned as shortcuts, but the special case github form of org/project will be normalized to github:org/project.
SSH connect strings will be normalized into git+ssh URLs.
Currently this supports Github, Bitbucket and Gitlab. Pull requests for additional hosts welcome.
Tokenizes strings that represent regular expressions.
tokens will contain the following object
{
  "type": ret.types.ROOT,
  "options": [
    [ { "type": ret.types.CHAR, "value": 102 },
      { "type": ret.types.CHAR, "value": 111 },
      { "type": ret.types.CHAR, "value": 111 } ],
    [ { "type": ret.types.CHAR, "value": 98 },
      { "type": ret.types.CHAR, "value": 97 },
      { "type": ret.types.CHAR, "value": 114 } ]
  ]
}

ret.types is a collection of the various token types exported by ret.
Only used in the root of the regexp. This is needed due to the possibility of the root containing a pipe | character. In that case, the token will have an options key that will be an array of arrays of tokens. If not, it will contain a stack key that is an array of tokens.
Groups contain tokens that are inside of a parenthesis. If the group begins with ? followed by another character, it’s a special type of group. A ‘:’ tells the group not to be remembered when exec is used. ‘=’ means the previous token matches only if followed by this group, and ‘!’ means the previous token matches only if NOT followed.
Like root, it can contain an options key instead of stack if there is a pipe.
{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "stack": [token1, token2...]
}

{
  "type": ret.types.GROUP,
  "remember": true,
  "followedBy": false,
  "notFollowedBy": false,
  "options": [
    [token1, token2...],
    [othertoken1, othertoken2...]
    ...
  ]
}

\b, \B, ^, and $ specify positions in the regexp.
Contains a key set specifying what tokens are allowed and a key not specifying if the set should be negated. A set can contain other sets, ranges, and characters.
Used in set tokens to specify a character range. from and to are character codes.
References a group token. value is 1-9.
Represents a single character token. value is the character code. This might seem a bit cluttering instead of concatenating characters together. But since repetition tokens only repeat the last token and not the last clause like the pipe, it’s simpler to do it this way.
ret.js will throw errors if given a string with an invalid regular expression. All possible errors are:
- Invalid group: the ? character is followed by an invalid character. It can only be followed by !, =, or :. Example: /(?_abc)/
- Nothing to repeat: a repetition token has nothing to repeat. Examples: /foo|?bar/, /{1,3}foo|bar/, /foo(+bar)/
- Unmatched ): a closing parenthesis has no matching opening one. Example: /hello)2u/
- Unterminated group: a group is opened but never closed. Example: /(1(23)4/
- Unterminated character class: a set is opened but never closed. Example: /[abc/

npm install ret
Tests are written with vows
babel-eslint allows you to lint ALL valid Babel code with the fantastic ESLint.
You only need to use babel-eslint if you are using types (Flow) or experimental features not supported in ESLint itself yet. Otherwise try the default parser (you don’t have to use it just because you are using Babel).
If there is an issue, first check if it can be reproduced with the regular parser or with the latest versions of eslint and babel-eslint!
For questions and support please visit the #discussion babel slack channel (sign up here) or eslint gitter!
Note that the ecmaFeatures config property may still be required for ESLint to work properly with features not in ECMAScript 5 by default. Examples are globalReturn and modules.
Flow: > Check out eslint-plugin-flowtype: An eslint plugin that makes flow type annotations global variables and marks declarations as used. Solves the problem of false positives with no-undef and no-unused-vars. - no-undef for global flow types: ReactElement, ReactClass #130 - Workaround: define types as globals in .eslintrc or define types and import them import type ReactElement from './types' - no-unused-vars/no-undef with Flow declarations (declare module A {}) #132
Modules/strict mode - no-unused-vars: [2, {vars: local}] #136
Please check out eslint-plugin-react for React/JSX issues - no-unused-vars with jsx
Please check out eslint-plugin-babel for other issues
ESLint allows custom parsers. This is great but some of the syntax nodes that Babel supports aren’t supported by ESLint. When using this plugin, ESLint is monkeypatched and your code is transformed into code that ESLint can understand. All location info such as line numbers, columns is also retained so you can track down errors with ease.
Basically babel-eslint exports an index.js that a linter can use. It just needs to export a parse method that takes in a string of code and outputs an AST.
| ESLint | babel-eslint |
|---|---|
| 4.x | >= 6.x |
| 3.x | >= 6.x |
| 2.x | >= 6.x |
| 1.x | >= 5.x |
Ensure that you have substituted the correct version lock for eslint and babel-eslint into this command:
.eslintrc
Check out the ESLint docs for all possible rules.
- sourceType can be set to 'module' (default) or 'script' if your code isn’t using ECMAScript modules.
- allowImportExportEverywhere (default false) can be set to true to allow import and export declarations to appear anywhere a statement is allowed, if your build environment supports that. Otherwise import and export declarations can only appear at a program’s top level.
- codeFrame (default true) can be set to false to disable the code frame in the reporter. This is useful since some eslint formatters don’t play well with it.

.eslintrc
{
"parser": "babel-eslint",
"parserOptions": {
"sourceType": "module",
"allowImportExportEverywhere": false,
"codeFrame": true
}
}
A wrapper around javascript array push/pop with a standard stack interface.
// empty stack
const stack = new Stack();
// from an array
const stack = new Stack([10, 3, 8, 40, 1]);
// empty stack
const stack = Stack.fromArray([]);
// with elements
const list = [10, 3, 8, 40, 1];
const stack = Stack.fromArray(list);
// If the list should not be mutated, simply construct the stack from a copy of it.
const stack = Stack.fromArray(list.slice(0));
push an element to the top of the stack.
| params | |
|---|---|
| name | type |
| element | object |
| runtime |
|---|
| O(1) |
returns the top element in the stack.
| return |
|---|
| object |
| runtime |
|---|
| O(1) |
removes and returns the top element of the stack.
| return |
|---|
| object |
| runtime |
|---|
| O(1) |
checks if the stack is empty.
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
returns the number of elements in the stack.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
creates a shallow copy of the stack.
| return |
|---|
| Stack |
| runtime |
|---|
| O(n) |
const stack = Stack.fromArray([{ id: 2 }, { id: 4 } , { id: 8 }]);
const clone = stack.clone();
clone.pop();
console.log(stack.peek()); // { id: 8 }
console.log(clone.peek()); // { id: 4 }
returns a copy of the remaining elements as an array.
| return |
|---|
| array |
| runtime |
|---|
| O(n) |
clears all elements from the stack.
| runtime |
|---|
| O(1) |
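Putting the operations above together, a minimal sketch of the whole interface (an assumed implementation for illustration, not the package's actual source):

```javascript
// Thin wrapper over Array push/pop implementing the interface above.
class Stack {
  constructor(elements) {
    this._elements = Array.isArray(elements) ? elements : [];
  }
  static fromArray(list) { return new Stack(list); } // wraps the given list
  push(element) { this._elements.push(element); return this; }   // O(1)
  peek() { return this.isEmpty() ? null : this._elements[this._elements.length - 1]; }
  pop() { return this.isEmpty() ? null : this._elements.pop(); } // O(1)
  isEmpty() { return this._elements.length === 0; }
  size() { return this._elements.length; }
  clone() { return new Stack(this._elements.slice()); }          // O(n)
  toArray() { return this._elements.slice(); }
  clear() { this._elements = []; }
}

const s = Stack.fromArray([10, 3, 8]);
s.push(40);
console.log(s.pop());  // 40
console.log(s.peek()); // 8
console.log(s.size()); // 3
```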
grunt build
Common components for Cloud APIs Node.js Client Libraries
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
It’s unlikely you will need to install this package directly, as it will be installed as a dependency when you install other @google-cloud packages.
The Google Cloud Common Node.js Client API Reference documentation also contains samples.
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).
Legacy Node.js versions are supported as a best effort:
legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.
This library follows Semantic Versioning.
This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.
Apache Version 2.0
See LICENSE
An HTTP request client that provides an
axios-like interface on top of node-fetch.
const {request} = require('gaxios');
const res = await request({
url: 'https://www.googleapis.com/discovery/v1/apis/'
});
Gaxios supports setting default properties both on the default instance, and on additional instances. This is often useful when making many requests to the same domain with the same base settings. For example:
const gaxios = require('gaxios');
gaxios.instance.defaults = {
baseURL: 'https://example.com',
headers: {
Authorization: 'SOME_TOKEN'
}
}
gaxios.request({url: '/data'}).then(...);
{
// The url to which the request should be sent. Required.
url: string,
// The HTTP method to use for the request. Defaults to `GET`.
method: 'GET',
// The base Url to use for the request. Prepended to the `url` property above.
baseURL: 'https://example.com',
// The HTTP headers to be sent with the request.
headers: { 'some': 'header' },
// The data to send in the body of the request. Data objects will be serialized as JSON.
data: {
some: 'data'
},
// The max size of the http response content in bytes allowed.
// Defaults to `0`, which is the same as unset.
maxContentLength: 2000,
// The max number of HTTP redirects to follow.
// Defaults to 100.
maxRedirects: 100,
// The querystring parameters that will be encoded using `qs` and
// appended to the url
params: {
querystring: 'parameters'
},
// By default, we use the `querystring` package in node core to serialize
// querystring parameters. You can override that and provide your
// own implementation.
paramsSerializer: (params) => {
return qs.stringify(params);
},
// The timeout for the HTTP request. Defaults to 0.
timeout: 1000,
// Optional method to override making the actual HTTP request. Useful
// for writing tests and instrumentation
adapter?: async (options, defaultAdapter) => {
const res = await defaultAdapter(options);
res.data = {
...res.data,
extraProperty: 'your extra property',
};
return res;
},
// The expected return type of the request. Options are:
// json | stream | blob | arraybuffer | text
// Defaults to `json`.
responseType: 'json',
// The node.js http agent to use for the request.
agent: someHttpsAgent,
// Custom function to determine if the response is valid based on the
// status code. Defaults to (>= 200 && < 300)
validateStatus: (status: number) => true,
// Configuration for retrying of requests.
retryConfig: {
// The number of times to retry the request. Defaults to 3.
retry?: number;
// The number of retries already attempted.
currentRetryAttempt?: number;
// The HTTP Methods that will be automatically retried.
// Defaults to ['GET','PUT','HEAD','OPTIONS','DELETE']
httpMethodsToRetry?: string[];
// The HTTP response status codes that will automatically be retried.
// Defaults to: [[100, 199], [429, 429], [500, 599]]
statusCodesToRetry?: number[][];
// Function to invoke when a retry attempt is made.
onRetryAttempt?: (err: GaxiosError) => Promise<void> | void;
// Function to invoke which determines if you should retry
shouldRetry?: (err: GaxiosError) => Promise<boolean> | boolean;
// When there is no response, the number of retries to attempt. Defaults to 2.
noResponseRetries?: number;
// The amount of time to initially delay the retry, in ms. Defaults to 100ms.
retryDelay?: number;
},
// Enables default configuration for retries.
retry: boolean,
// Cancelling a request requires the `abort-controller` library.
// See https://github.com/bitinn/node-fetch#request-cancellation-with-abortsignal
signal?: AbortSignal
}
Returns true if an object was created by the Object constructor, or Object.create(null).
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Use isobject if you only want to check if the value is an object and not an array or null.
with es modules
or with commonjs
true when created by the Object constructor, or Object.create(null).
isPlainObject(Object.create({}));
//=> true
isPlainObject(Object.create(Object.prototype));
//=> true
isPlainObject({foo: 'bar'});
//=> true
isPlainObject({});
//=> true
isPlainObject(Object.create(null));
//=> true
false when not created by the Object constructor.
isPlainObject(1);
//=> false
isPlainObject(['foo', 'bar']);
//=> false
isPlainObject([]);
//=> false
isPlainObject(new Foo);
//=> false
isPlainObject(null);
//=> false
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 19 | jonschlinkert |
| 6 | TrySound |
| 6 | stevenvachon |
| 3 | onokumus |
| 1 | wtgtybhertgeghgtwtg |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.8.0, on April 28, 2019.
POSIX character classes for creating regular expressions.
Install with npm:
Install with yarn:
The POSIX standard supports the following classes or categories of characters (note that classes must be defined within brackets):
| POSIX class | Equivalent to | Matches |
|---|---|---|
[:alnum:] |
[A-Za-z0-9] |
digits, uppercase and lowercase letters |
[:alpha:] |
[A-Za-z] |
upper- and lowercase letters |
[:ascii:] |
[\x00-\x7F] |
ASCII characters |
[:blank:] |
[ \t] |
space and TAB characters only |
[:cntrl:] |
[\x00-\x1F\x7F] |
Control characters |
[:digit:] |
[0-9] |
digits |
[:graph:] |
[^[:cntrl:]] |
graphic characters (all characters which have graphic representation) |
[:lower:] |
[a-z] |
lowercase letters |
[:print:] |
[[:graph:] ] |
graphic characters and space |
[:punct:] |
[-!"#$%&'()*+,./:;<=>?@[]^_`{ | }~] |
all punctuation characters (all graphic characters except letters and digits) |
[:space:] |
[ \t\n\r\f\v] |
all blank (whitespace) characters, including spaces, tabs, new lines, carriage returns, form feeds, and vertical tabs |
[:upper:] |
[A-Z] |
uppercase letters |
[:word:] |
[A-Za-z0-9_] |
word characters |
[:xdigit:] |
[0-9A-Fa-f] |
hexadecimal digits |
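JavaScript’s RegExp has no native POSIX classes, so one way to use the table above is to substitute the equivalent ranges. A small hypothetical sketch:

```javascript
// Expand a few POSIX classes from the table into JS character classes.
const posix = {
  alnum: 'A-Za-z0-9',
  alpha: 'A-Za-z',
  digit: '0-9',
  xdigit: '0-9A-Fa-f',
};

// a[[:digit:]]b -> /a[0-9]b/
const re = new RegExp(`a[${posix.digit}]b`);
console.log(re.test('a0b')); // true
console.log(re.test('aXb')); // false
```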
- a[[:digit:]]b matches a0b, a1b, …, a9b.
- a[:digit:]b is invalid; character classes must be enclosed in brackets.
- [[:digit:]abc] matches any digit, as well as a, b, and c.
- [abc[:digit:]] is the same as the previous, matching any digit, as well as a, b, and c.
- [^ABZ[:lower:]] matches any character except lowercase letters, A, B, and Z.

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.5.0, on April 20, 2017.
Determine address of proxied request
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Return the address of the request, using the given trust parameter.
The trust argument is a function that returns true if you trust the address, false if you don’t. The closest untrusted address is returned.
proxyaddr(req, function (addr) { return addr === '127.0.0.1' })
proxyaddr(req, function (addr, i) { return i < 1 })
The trust argument may also be a single IP address string or an array of trusted addresses, as plain IP addresses, CIDR-formatted strings, or IP/netmask strings.
proxyaddr(req, '127.0.0.1')
proxyaddr(req, ['127.0.0.0/8', '10.0.0.0/8'])
proxyaddr(req, ['127.0.0.0/255.0.0.0', '192.168.0.0/255.255.0.0'])
This module also supports IPv6. Your IPv6 addresses will be normalized automatically (i.e. fe80::00ed:1 equals fe80:0:0:0:0:0:ed:1).
This module will automatically work with IPv4-mapped IPv6 addresses as well to support node.js in IPv6-only mode. This means that you do not have to specify both ::ffff:a00:1 and 10.0.0.1.
As a convenience, this module also takes certain pre-defined names in addition to IP addresses, which expand into IP addresses:
- loopback: IPv4 and IPv6 loopback addresses (like ::1 and 127.0.0.1).
- linklocal: IPv4 and IPv6 link-local addresses (like fe80::1:1:1:1 and 169.254.0.1).
- uniquelocal: IPv4 private addresses and IPv6 unique-local addresses (like fc00:ac:1ab5:fff::1 and 192.168.0.1).

When trust is specified as a function, it will be called for each address to determine if it is a trusted address. The function is given two arguments: addr and i, where addr is a string of the address to check and i is a number that represents the distance from the socket address.
Return all the addresses of the request, optionally stopping at the first untrusted. This array is ordered from closest to furthest (i.e. arr[0] === req.connection.remoteAddress).
The optional trust argument takes the same arguments as trust does in proxyaddr(req, trust).
Compiles argument val into a trust function. This function takes the same arguments as trust does in proxyaddr(req, trust) and returns a function suitable for proxyaddr(req, trust).
This function is meant to be optimized for use against every request. It is recommended to compile a trust function up-front for the trusted configuration and pass that to proxyaddr(req, trust) for each request.
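The pattern is easiest to see with a toy version of compile (hypothetical code: the real proxyaddr.compile also understands CIDR ranges, netmasks, and the named presets above):

```javascript
// Compile a trust function once, then reuse it for every request.
// Toy version: exact IP-string matching only.
function compileTrust(trusted) {
  const set = new Set([].concat(trusted));
  return (addr) => set.has(addr);
}

const trust = compileTrust(['127.0.0.1', '10.0.0.1']); // built up-front
console.log(trust('127.0.0.1'));   // true
console.log(trust('203.0.113.9')); // false
```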
Returns true if the value is a number, with comprehensive tests.
Install with npm:
To understand some of the rationale behind the decisions made in this library (and to learn about some oddities of number evaluation in JavaScript), see this gist.
See the tests for more examples.
isNumber(5e3) //=> true
isNumber(0xff) //=> true
isNumber(-1.1) //=> true
isNumber(0) //=> true
isNumber(1) //=> true
isNumber(1.1) //=> true
isNumber(10) //=> true
isNumber(10.10) //=> true
isNumber(100) //=> true
isNumber('-1.1') //=> true
isNumber('0') //=> true
isNumber('012') //=> true
isNumber('0xff') //=> true
isNumber('1') //=> true
isNumber('1.1') //=> true
isNumber('10') //=> true
isNumber('10.10') //=> true
isNumber('100') //=> true
isNumber('5e3') //=> true
isNumber(parseInt('012')) //=> true
isNumber(parseFloat('012')) //=> true
See the tests for more examples.
isNumber('foo') //=> false
isNumber([1]) //=> false
isNumber([]) //=> false
isNumber(function () {}) //=> false
isNumber(Infinity) //=> false
isNumber(NaN) //=> false
isNumber(new Array('abc')) //=> false
isNumber(new Array(2)) //=> false
isNumber(new Buffer('abc')) //=> false
isNumber(null) //=> false
isNumber(undefined) //=> false
isNumber({abc: 'abc'}) //=> false
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
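A minimal sketch consistent with the examples above (an assumption for illustration, not the package's actual source):

```javascript
// Finite numbers, and non-empty strings that coerce to finite numbers,
// count as numbers; everything else (arrays, NaN, Infinity, etc.) does not.
function isNumber(value) {
  if (typeof value === 'number') return Number.isFinite(value);
  if (typeof value === 'string' && value.trim() !== '') {
    return Number.isFinite(Number(value));
  }
  return false;
}

console.log(isNumber(5e3));    // true
console.log(isNumber('0xff')); // true
console.log(isNumber('foo'));  // false
console.log(isNumber([]));     // false
```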
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.1.30, on September 10, 2016.
Utilities for working with TypeScript + ESLint together.
This package has inherited its version number from the @typescript-eslint project, meaning that even though this package is 2.x.y, you shouldn’t expect 100% stability between minor version bumps, i.e. treat it as a 0.x.y package.
Feel free to use it now, and let us know what utilities you need or send us PRs with utilities you build on top of it.
Once it is stable, it will be renamed to @typescript-eslint/util for a 4.0.0 release.
| Name | Description |
|---|---|
ASTUtils |
Tools for operating on the ESTree AST. Also includes the eslint-utils package, correctly typed to work with the types found in TSESTree |
ESLintUtils |
Tools for creating ESLint rules with TypeScript. |
JSONSchema |
Types from the @types/json-schema package, re-exported to save you having to manually import them. Also ensures you’re using the same version of the types as this package. |
TSESLint |
Types for ESLint, correctly typed to work with the types found in TSESTree. |
TSESLintScope |
The eslint-scope package, correctly typed to work with the types found in both TSESTree and TSESLint |
TSESTree |
Types for the TypeScript flavor of ESTree created by @typescript-eslint/typescript-estree. |
AST_NODE_TYPES |
An enum with the names of every single node found in TSESTree. |
AST_TOKEN_TYPES |
An enum with the names of every single token found in TSESTree. |
ParserServices |
Typing for the parser services provided when parsing a file using @typescript-eslint/typescript-estree. |
See the contributing guide here
graceful-fs functions as a drop-in replacement for the fs module, making various improvements.
The improvements are meant to normalize behavior across different platforms and environments, and to make filesystem access more resilient to errors.
- queues up open and readdir calls, and retries them once something closes if there is an EMFILE error from too many file descriptors.
- fixes lchmod for Node versions prior to 0.6.2.
- implements fs.lutimes if possible. Otherwise it becomes a noop.
- ignores EINVAL and EPERM errors in chown, fchown or lchown if the user isn’t root.
- makes lchmod and lchown become noops, if not available.
- retries reading a file if read results in an EAGAIN error.

// use just like fs
var fs = require('graceful-fs')
// now go and do stuff with it...
fs.readFileSync('some-file-or-whatever')
If you want to patch the global fs module (or any other fs-like module) you can do this:
// Make sure to read the caveat below.
var realFs = require('fs')
var gracefulFs = require('graceful-fs')
gracefulFs.gracefulify(realFs)
This should only ever be done at the top-level application layer, in order to delay on EMFILE errors from any fs-using dependencies. You should not do this in a library, because it can cause unexpected delays in other parts of the program.
This module is fairly stable at this point, and used by a lot of things. That being said, because it implements a subtle behavior change in a core part of the node API, even modest changes can be extremely breaking, and the versioning is thus biased towards bumping the major when in doubt.
The main change between major versions has been switching between providing a fully-patched fs module vs monkey-patching the node core builtin, and the approach by which a non-monkey-patched fs was created.
The goal is to trade EMFILE errors for slower fs operations. So, if you try to open a zillion files, rather than crashing, open operations will be queued up and wait for something else to close.
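A conceptual sketch of that trade (hypothetical code, not the actual graceful-fs internals): park opens that fail with EMFILE and retry them after something closes.

```javascript
// Queue of opens that hit EMFILE and are waiting for a descriptor.
const pending = [];

function gracefulOpen(open, path, cb) {
  open(path, (err, fd) => {
    if (err && err.code === 'EMFILE') {
      pending.push(() => gracefulOpen(open, path, cb)); // wait for a close
    } else {
      cb(err, fd);
    }
  });
}

function onClose() { // invoked whenever a file descriptor is released
  const retry = pending.shift();
  if (retry) retry();
}

// Fake open for demonstration: fails once with EMFILE, then succeeds.
let calls = 0;
function fakeOpen(path, cb) {
  calls += 1;
  if (calls === 1) cb(Object.assign(new Error('too many fds'), { code: 'EMFILE' }));
  else cb(null, 42);
}

let result;
gracefulOpen(fakeOpen, '/tmp/example', (err, fd) => { result = fd; });
onClose(); // something closed; the queued open retries and succeeds
console.log(result); // 42
```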
There are advantages to each approach. Monkey-patching the fs means that no EMFILE errors can possibly occur anywhere in your application, because everything is using the same core fs module, which is patched. However, it can also obviously cause undesirable side-effects, especially if the module is loaded multiple times.
Implementing a separate-but-identical patched fs module is more surgical (and doesn’t run the risk of patching multiple times), but also imposes the challenge of keeping in sync with the core module.
The current approach loads the fs module, and then creates a lookalike object that has all the same methods, except a few that are patched. It is safe to use in all versions of Node from 0.8 through 7.0.
Returns an array with only the unique values from the first array, by excluding all values from additional arrays using strict equality for comparisons.
Install with npm:
Install with yarn:
Install with bower
Returns the difference between the first array and additional arrays.
var diff = require('arr-diff');
var a = ['a', 'b', 'c', 'd'];
var b = ['b', 'c'];
console.log(diff(a, b))
//=> ['a', 'd']
This library versus array-differ, on April 14, 2017:
Benchmarking: (4 of 4)
· long-dupes
· long
· med
· short
# benchmark/fixtures/long-dupes.js (100804 bytes)
arr-diff-3.0.0 x 822 ops/sec ±0.67% (86 runs sampled)
arr-diff-4.0.0 x 2,141 ops/sec ±0.42% (89 runs sampled)
array-differ x 708 ops/sec ±0.70% (89 runs sampled)
fastest is arr-diff-4.0.0
# benchmark/fixtures/long.js (94529 bytes)
arr-diff-3.0.0 x 882 ops/sec ±0.60% (87 runs sampled)
arr-diff-4.0.0 x 2,329 ops/sec ±0.97% (83 runs sampled)
array-differ x 769 ops/sec ±0.61% (90 runs sampled)
fastest is arr-diff-4.0.0
# benchmark/fixtures/med.js (708 bytes)
arr-diff-3.0.0 x 856,150 ops/sec ±0.42% (89 runs sampled)
arr-diff-4.0.0 x 4,665,249 ops/sec ±1.06% (89 runs sampled)
array-differ x 653,888 ops/sec ±1.02% (86 runs sampled)
fastest is arr-diff-4.0.0
# benchmark/fixtures/short.js (60 bytes)
arr-diff-3.0.0 x 3,078,467 ops/sec ±0.77% (93 runs sampled)
arr-diff-4.0.0 x 9,213,296 ops/sec ±0.65% (89 runs sampled)
array-differ x 1,337,051 ops/sec ±0.91% (92 runs sampled)
fastest is arr-diff-4.0.0
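The strict-equality exclusion described above can be sketched in a few lines (hypothetical code, not the package source; Set membership uses SameValueZero, which matches strict equality for everything except NaN):

```javascript
// Keep only elements of the first array that appear in none of the rest.
function diff(arr, ...rest) {
  const exclude = new Set([].concat(...rest));
  return arr.filter((el) => !exclude.has(el));
}

console.log(diff(['a', 'b', 'c', 'd'], ['b', 'c'])); // [ 'a', 'd' ]
```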
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 33 | jonschlinkert |
| 2 | paulmillr |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.5.0, on April 14, 2017.
# convert-source-map
Converts a source-map from/to different formats and allows adding/changing properties.
var convert = require('convert-source-map');
var json = convert
.fromComment('//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVpbGQvZm9vLm1pbi5qcyIsInNvdXJjZXMiOlsic3JjL2Zvby5qcyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiQUFBQSIsInNvdXJjZVJvb3QiOiIvIn0=')
.toJSON();
var modified = convert
.fromComment('//# sourceMappingURL=data:application/json;base64,eyJ2ZXJzaW9uIjozLCJmaWxlIjoiYnVpbGQvZm9vLm1pbi5qcyIsInNvdXJjZXMiOlsic3JjL2Zvby5qcyJdLCJuYW1lcyI6W10sIm1hcHBpbmdzIjoiQUFBQSIsInNvdXJjZVJvb3QiOiIvIn0=')
.setProperty('sources', [ 'SRC/FOO.JS' ])
.toJSON();
console.log(json);
console.log(modified);
{"version":3,"file":"build/foo.min.js","sources":["src/foo.js"],"names":[],"mappings":"AAAA","sourceRoot":"/"}
{"version":3,"file":"build/foo.min.js","sources":["SRC/FOO.JS"],"names":[],"mappings":"AAAA","sourceRoot":"/"}
Returns source map converter from given object.
Returns source map converter from given json string.
Returns source map converter from given base64 encoded json string.
Returns source map converter from given base64 encoded json string prefixed with //# sourceMappingURL=....
Returns source map converter from given filename by parsing //# sourceMappingURL=filename.
filename must point to a file that is found inside the mapFileDir. Most tools store this file right next to the generated file, i.e. the one containing the source map.
Finds last sourcemap comment in file and returns source map converter or returns null if no source map comment was found.
Finds last sourcemap comment in file and returns source map converter or returns null if no source map comment was found.
The sourcemap will be read from the map file found by parsing # sourceMappingURL=file comment. For more info see fromMapFileComment.
Returns a copy of the underlying source map.
Converts source map to json string. If space is given (optional), this will be passed to JSON.stringify when the JSON string is generated.
Converts source map to base64 encoded json string.
Converts source map to an inline comment that can be appended to the source-file.
By default, the comment is formatted like: //# sourceMappingURL=..., which you would normally see in a JS source file.
When options.multiline == true, the comment is formatted like: /*# sourceMappingURL=... */, which you would find in a CSS source file.
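Under the hood, an inline comment is just base64-encoded JSON. A rough sketch of the round trip (an assumption for illustration; the package handles many more edge cases):

```javascript
// Encode a source map object as an inline sourceMappingURL comment
// and decode it back.
function toComment(map) {
  const b64 = Buffer.from(JSON.stringify(map)).toString('base64');
  return '//# sourceMappingURL=data:application/json;base64,' + b64;
}

function fromComment(comment) {
  const b64 = comment.split('base64,')[1];
  return JSON.parse(Buffer.from(b64, 'base64').toString('utf8'));
}

const map = { version: 3, file: 'foo.min.js', sources: ['src/foo.js'], names: [], mappings: 'AAAA' };
console.log(fromComment(toComment(map)).file); // foo.min.js
```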
Adds given property to the source map. Throws an error if property already exists.
Sets given property to the source map. If property doesn’t exist it is added, otherwise its value is updated.
Gets given property of the source map.
Returns src with all source map comments removed
Returns src with all source map comments pointing to map files removed.
Provides a fresh RegExp each time it is accessed. Can be used to find source map comments.
Provides a fresh RegExp each time it is accessed. Can be used to find source map comments pointing to map files.
Returns a comment that links to an external source map via file.
By default, the comment is formatted like: //# sourceMappingURL=..., which you would normally see in a JS source file.
When options.multiline == true, the comment is formatted like: /*# sourceMappingURL=... */, which you would find in a CSS source file.
Create a javascript regular expression for matching everything except for the given string.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
The main export is a function that takes a string and an options object.
Example
Strict matching
By default, the returned regex is for strictly (not) matching the exact given pattern (in other words, “match this string if it does NOT exactly equal foo”):
var re = not('foo');
console.log(re.test('foo')); //=> false
console.log(re.test('bar')); //=> true
console.log(re.test('foobar')); //=> true
console.log(re.test('barfoo')); //=> true
Returns a string to allow you to create your own regex:
options.contains
You can relax strict matching by setting options.contains to true (in other words, “match this string if it does NOT contain foo”):
var re = not('foo', {contains: true});
console.log(re.test('foo')); //=> false
console.log(re.test('bar')); //=> true
console.log(re.test('foobar')); //=> false
console.log(re.test('barfoo')); //=> false
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 9 | jonschlinkert |
| 1 | doowb |
| 1 | EdwardBetts |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on February 19, 2018.
# has-value
Returns true if a value exists, false if empty. Works with deeply nested values using object paths.
Install with npm:
Works for:
Works with property values (supports object-path notation, like foo.bar) or a single value:
var hasValue = require('has-value');
hasValue('foo');
hasValue({foo: 'bar'}, 'foo');
hasValue({a: {b: {c: 'foo'}}}, 'a.b.c');
//=> true
hasValue('');
hasValue({foo: ''}, 'foo');
//=> false
hasValue(0);
hasValue(1);
hasValue({foo: 0}, 'foo');
hasValue({foo: 1}, 'foo');
hasValue({foo: null}, 'foo');
hasValue({foo: {bar: 'a'}}, 'foo');
hasValue({foo: {bar: 'a'}}, 'foo.bar');
//=> true
hasValue({foo: {}}, 'foo');
hasValue({foo: {bar: {}}}, 'foo.bar');
hasValue({foo: undefined}, 'foo');
//=> false
hasValue([]);
hasValue([[]]);
hasValue([[], []]);
hasValue([undefined]);
hasValue({foo: []}, 'foo');
//=> false
hasValue([0]);
hasValue([null]);
hasValue(['foo']);
hasValue({foo: ['a']}, 'foo');
//=> true
hasValue(function() {})
hasValue(function(foo) {})
hasValue({foo: function(foo) {}}, 'foo');
hasValue({foo: function() {}}, 'foo');
//=> true
hasValue(true);
hasValue(false);
hasValue({foo: true}, 'foo');
hasValue({foo: false}, 'foo');
//=> true

To do the opposite and test for empty values, do:
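A minimal sketch of the inverse check, assuming a simplified stand-in for hasValue (hasValueLike below is illustrative, not the module's real implementation):

```javascript
// Hypothetical stand-in for has-value, just enough to invert it.
function hasValueLike(val) {
  if (val === undefined) return false;
  if (typeof val === 'string') return val.length > 0;
  if (Array.isArray(val)) return val.length > 0 && val.some(hasValueLike);
  if (val !== null && typeof val === 'object') return Object.keys(val).length > 0;
  return true; // null, zero, false, and functions all count as values
}

// The opposite check: true when the value is "empty".
var isEmpty = function (o) {
  return !hasValueLike(o);
};

console.log(isEmpty(''));       // true
console.log(isEmpty([[], []])); // true (recurses into nested empty arrays)
console.log(isEmpty(0));        // false
```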
Release history

- zero always returns true
- array now recurses, so that an array of empty arrays will return false
- null now returns true

Related projects

- Use property paths (a.b.c) to get a nested value from an object. | homepage
- Create nested values using dot notation ('a.b.c') paths. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 17 | jonschlinkert |
| 2 | rmharrison |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on May 19, 2017.

# Acorn AST walker
An abstract syntax tree walker for the ESTree format.
You are welcome to report bugs or create pull requests on github. For questions and discussion, please use the Tern discussion forum.
The easiest way to install acorn-walk is from npm:
Alternately, you can download the source and build acorn yourself:
An algorithm for recursing through a syntax tree is stored as an object, with a property for each tree node type holding a function that will recurse through such a node. There are several ways to run such a walker.
simple(node, visitors, base, state) does a ‘simple’ walk over a tree. node should be the AST node to walk, and visitors an object with properties whose names correspond to node types in the ESTree spec. The properties should contain functions that will be called with the node object and, if applicable, the state at that point. The last two arguments are optional. base is a walker algorithm, and state is a start state. The default walker will simply visit all statements and expressions and not produce a meaningful state. (An example of a use of state is to track scope at each point in the tree.)
const acorn = require("acorn")
const walk = require("acorn-walk")
walk.simple(acorn.parse("let x = 10"), {
Literal(node) {
console.log(`Found a literal: ${node.value}`)
}
})

ancestor(node, visitors, base, state) does a ‘simple’ walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter.
const acorn = require("acorn")
const walk = require("acorn-walk")
walk.ancestor(acorn.parse("foo('hi')"), {
Literal(_, ancestors) {
console.log("This literal's ancestors are:", ancestors.map(n => n.type))
}
})

recursive(node, state, functions, base) does a ‘recursive’ walk, where the walker functions are responsible for continuing the walk on the child nodes of their target node. state is the start state, and functions should contain an object that maps node types to walker functions. Such functions are called with (node, state, c) arguments, and can cause the walk to continue on a sub-node by calling the c argument on it with (node, state) arguments. The optional base argument provides the fallback walker functions for node types that aren’t handled in the functions object. If not given, the default walkers will be used.
make(functions, base) builds a new walker object by using the walker functions in functions and filling in the missing ones by taking defaults from base.
full(node, callback, base, state) does a ‘full’ walk over a tree, calling the callback with the arguments (node, state, type) for each node.
fullAncestor(node, callback, base, state) does a ‘full’ walk over a tree, building up an array of ancestor nodes (including the current node) and passing the array to the callbacks as a third parameter.
const acorn = require("acorn")
const walk = require("acorn-walk")
walk.full(acorn.parse("1 + 1"), node => {
console.log(`There's a ${node.type} node at ${node.start}`)
})

findNodeAt(node, start, end, test, base, state) tries to locate a node in a tree at the given start and/or end offsets, which satisfies the predicate test. start and end can be either null (as wildcard) or a number. test may be a string (indicating a node type) or a function that takes (nodeType, node) arguments and returns a boolean indicating whether this node is interesting. base and state are optional, and can be used to specify a custom walker. Nodes are tested from inner to outer, so if two nodes match the boundaries, the inner one will be preferred.
findNodeAround(node, pos, test, base, state) is a lot like findNodeAt, but will match any node that exists ‘around’ (spanning) the given position.
findNodeAfter(node, pos, test, base, state) is similar to findNodeAround, but will match all nodes after the given position (testing outer nodes before inner nodes).
A cache for managing namespaced sub-caches
Install with npm:
Create a new FragmentCache with an optional object to use for caches.
Example
Params
- cacheName {String}
- returns {Object}: Returns the map-cache instance.

Get cache name from the fragment.caches object. Creates a new MapCache if it doesn’t already exist.
Example
var cache = fragment.cache('files');
console.log(fragment.caches.hasOwnProperty('files'));
//=> true

Params

- cacheName {String}
- returns {Object}: Returns the map-cache instance.

Set a value for property key on cache name.
Example
Params
- name {String}
- key {String}: Property name to set
- val {any}: The value of key
- returns {Object}: The cache instance for chaining

Returns true if a non-undefined value is set for key on fragment cache name.
Example
var cache = fragment.cache('files');
cache.set('somefile.js');
console.log(cache.has('somefile.js'));
//=> true
console.log(cache.has('some-other-file.js'));
//=> false

Params

- name {String}: Cache name
- key {String}: Optionally specify a property to check for on cache name
- returns {Boolean}

Get name, or if specified, the value of key. Invokes the cache method, so that cache name will be created if it doesn’t already exist. If key is not passed, the entire cache (name) is returned.
Example
var Vinyl = require('vinyl');
var cache = fragment.cache('files');
cache.set('somefile.js', new Vinyl({path: 'somefile.js'}));
console.log(cache.get('somefile.js'));
//=> <File "somefile.js">

Params

- name {String}
- returns {Object}: Returns cache name, or the value of key if specified

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
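Taken together, the cache/set/has/get methods above can be sketched with plain Maps. FragmentCacheLike below is a hypothetical stand-in (the real module wraps map-cache instances):

```javascript
// Hypothetical plain-Map sketch of the fragment-cache API described above.
class FragmentCacheLike {
  constructor() {
    this.caches = {}; // namespace -> Map
  }
  cache(name) {
    // create the named sub-cache on first access
    return this.caches[name] || (this.caches[name] = new Map());
  }
  set(name, key, val) {
    this.cache(name).set(key, val);
    return this.cache(name); // for chaining
  }
  has(name, key) {
    return this.cache(name).get(key) !== undefined;
  }
  get(name, key) {
    return key === undefined ? this.cache(name) : this.cache(name).get(key);
  }
}

var fragment = new FragmentCacheLike();
fragment.set('files', 'somefile.js', {path: 'somefile.js'});
console.log(fragment.has('files', 'somefile.js')); // true
console.log(fragment.has('files', 'other.js'));    // false
```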
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.2.0, on October 17, 2016.

# extend-shallow
Extend an object with the properties of additional objects. node.js/javascript util.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Pass an empty object to shallow clone:
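A sketch of that usage, with Object.assign standing in for the module's extend function (for a shallow copy the behavior is the same):

```javascript
// Stand-in for require('extend-shallow'); Object.assign has the same
// shallow-copy behavior for this use case.
var extend = Object.assign;

var obj = {a: 1, b: {c: 2}};
var clone = extend({}, obj); // pass an empty object to shallow clone

console.log(clone.a);           // 1
console.log(clone !== obj);     // true (new top-level object)
console.log(clone.b === obj.b); // true (nested objects are shared: shallow)
```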
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
Object constructor. | homepage

| Commits | Contributor |
|---|---|
| 33 | jonschlinkert |
| 1 | pdehaan |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on November 19, 2017.

# YAML
yaml is a JavaScript parser and stringifier for YAML, a human friendly data serialization standard. It supports both parsing and stringifying data using all versions of YAML, along with all common data schemas. As a particularly distinguishing feature, yaml fully supports reading and writing comments and blank lines in YAML documents.
For the purposes of versioning, any changes that break any of the endpoints or APIs documented here will be considered semver-major breaking changes. Undocumented library internals may change between minor versions, and previous APIs may be deprecated (but not removed).
For more information, see the project’s documentation site: eemeli.org/yaml
To install:
Note: yaml 0.x and 1.x are rather different implementations. For the earlier yaml, see tj/js-yaml.
The API provided by yaml has three layers, depending on how deep you need to go: Parse & Stringify, Documents, and the CST Parser. The first has the simplest API and “just works”, the second gets you all the bells and whistles supported by the library along with a decent AST, and the third is the closest to YAML source, making it fast, raw, and crude.
- YAML.createNode(value, wrapScalars, tag): Node
- YAML.defaultOptions
- YAML.Document
- YAML.parseAllDocuments(str, options): YAML.Document[]
- YAML.parseDocument(str, options): YAML.Document

# file.yml
YAML:
- A human-readable data serialization language
- https://en.wikipedia.org/wiki/YAML
yaml:
- A complete JavaScript implementation
- https://www.npmjs.com/package/yaml

import fs from 'fs'
import YAML from 'yaml'
YAML.parse('3.14159')
// 3.14159
YAML.parse('[ true, false, maybe, null ]\n')
// [ true, false, 'maybe', null ]
const file = fs.readFileSync('./file.yml', 'utf8')
YAML.parse(file)
// { YAML:
// [ 'A human-readable data serialization language',
// 'https://en.wikipedia.org/wiki/YAML' ],
// yaml:
// [ 'A complete JavaScript implementation',
// 'https://www.npmjs.com/package/yaml' ] }

import YAML from 'yaml'
YAML.stringify(3.14159)
// '3.14159\n'
YAML.stringify([true, false, 'maybe', null])
// `- true
// - false
// - maybe
// - null
// `
YAML.stringify({ number: 3, plain: 'string', block: 'two\nlines\n' })
// `number: 3
// plain: string
// block: >
// two
//
// lines
// `

Browser testing provided by:
Returns true if the platform is windows. UMD module, works with node.js, commonjs, browser, AMD, electron, etc.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
As of v0.2.0 this module always returns a function.
var isWindows = require('is-windows');
console.log(isWindows());
//=> returns true if the platform is windows

Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
- true if the given string looks like a glob pattern or an extglob pattern… more | homepage
- true if the path appears to be relative. | homepage

| Commits | Contributor |
|---|---|
| 11 | jonschlinkert |
| 4 | doowb |
| 1 | SimenB |
| 1 | gucong3000 |
Jon Schlinkert
Extract the non-magic parent path from a glob string.
var globParent = require('glob-parent');
globParent('path/to/*.js'); // 'path/to'
globParent('/root/path/to/*.js'); // '/root/path/to'
globParent('/*.js'); // '/'
globParent('*.js'); // '.'
globParent('**/*.js'); // '.'
globParent('path/{to,from}'); // 'path'
globParent('path/!(to|from)'); // 'path'
globParent('path/?(to|from)'); // 'path'
globParent('path/+(to|from)'); // 'path'
globParent('path/*(to|from)'); // 'path'
globParent('path/@(to|from)'); // 'path'
globParent('path/**/*'); // 'path'
// if provided a non-glob path, returns the nearest dir
globParent('path/foo/bar.js'); // 'path/foo'
globParent('path/foo/'); // 'path/foo'
globParent('path/foo'); // 'path' (see issue #3 for details)

globParent(maybeGlobString, [options])

Takes a string and returns the part of the path before the glob begins. Be aware of Escaping rules and Limitations below.
The following characters have special significance in glob patterns and must be escaped if you want them to be treated as regular path characters:
- ? (question mark) unless used as a path segment alone
- * (asterisk)
- | (pipe)
- ( (opening parenthesis)
- ) (closing parenthesis)
- { (opening curly brace)
- } (closing curly brace)
- [ (opening bracket)
- ] (closing bracket)

Example
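A hypothetical helper that applies the escaping rules above (escapeGlobChars is illustrative and not part of glob-parent):

```javascript
// Backslash-escape the glob-significant characters listed above.
function escapeGlobChars(str) {
  return str.replace(/([?*|(){}[\]])/g, '\\$1');
}

console.log(escapeGlobChars('path/(to)/[dir]'));
// 'path/\(to\)/\[dir\]'
console.log(escapeGlobChars('plain/path'));
// 'plain/path' (nothing to escape)
```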
This library attempts a quick and imperfect method of determining which path parts have glob magic without fully parsing/lexing the pattern. There are some advanced use cases that can trip it up, such as nested braces where the outer pair is escaped and the inner one contains a path separator. If you find yourself in the unlikely circumstance of being affected by this or need to ensure higher-fidelity glob handling in your library, it is recommended that you pre-process your input with expand-braces and/or expand-brackets.
Backslashes are not valid path separators for globs. If a path with backslashes is provided anyway, for simple cases, glob-parent will replace the path separator for you and return the non-glob parent path (now with forward-slashes, which are still valid as Windows path separators).
This cannot be used in conjunction with escape characters.
// BAD
globParent('C:\\Program Files \\(x86\\)\\*.ext') // 'C:/Program Files /(x86/)'
// GOOD
globParent('C:/Program Files\\(x86\\)/*.ext') // 'C:/Program Files (x86)'

If you are using escape characters for a pattern without path parts (i.e. relative to cwd), prefix with ./ to avoid confusing glob-parent.
// BAD
globParent('foo \\[bar]') // 'foo '
globParent('foo \\[bar]*') // 'foo '
// GOOD
globParent('./foo \\[bar]') // 'foo [bar]'
globParent('./foo \\[bar]*') // '.'

ISC
Node.js core streams for userland
This package is a mirror of the streams implementations in Node.js.
Full documentation may be found on the Node.js website.
If you want to guarantee a stable streams base, regardless of what version of Node you, or the users of your libraries are using, use readable-stream only and avoid the “stream” module in Node-core, for background see this blogpost.
As of version 2.0.0 readable-stream uses semantic versioning.
v3.x.x of readable-stream is a cut from Node 10. This version supports Node 6, 8, and 10, as well as evergreen browsers, IE 11 and latest Safari. The breaking changes introduced by v3 are composed by the combined breaking changes in Node v9 and Node v10, as follows:
v2.x.x of readable-stream is a cut of the stream module from Node 8 (there have been no semver-major changes from Node 4 to 8). This version supports all Node.js versions from 0.8, as well as evergreen browsers and IE 10 & 11.
Cross-browser Testing Platform and Open Source <3 Provided by Sauce Labs
You can swap your require('stream') with require('readable-stream') without any changes, if you are just using one of the main classes and functions.
Note that require('stream') will return Stream, while require('readable-stream') will return Readable. We discourage using whatever is exported directly, but rather use one of the properties as shown in the example above.
readable-stream is maintained by the Streams Working Group, which oversees the development and maintenance of the Streams API within Node.js. The responsibilities of the Streams Working Group include:
readable-stream to be included in Node.js.

Yet Another Linked List
There are many doubly-linked list implementations like it, but this one is mine.
For when an array would be too big, and a Map can’t be iterated in reverse order.
var yallist = require('yallist')
var myList = yallist.create([1, 2, 3])
myList.push('foo')
myList.unshift('bar')
// of course pop() and shift() are there, too
console.log(myList.toArray()) // ['bar', 1, 2, 3, 'foo']
myList.forEach(function (k) {
// walk the list head to tail
})
myList.forEachReverse(function (k, index, list) {
// walk the list tail to head
})
var myDoubledList = myList.map(function (k) {
return k + k
})
// now myDoubledList contains ['barbar', 2, 4, 6, 'foofoo']
// mapReverse is also a thing
var myDoubledListReverse = myList.mapReverse(function (k) {
return k + k
}) // ['foofoo', 6, 4, 2, 'barbar']
var reduced = myList.reduce(function (set, entry) {
set += entry
return set
}, 'start')
console.log(reduced) // 'startbar123foo'

The whole API is considered “public”.
Functions with the same name as an Array method work more or less the same way.
There’s reverse versions of most things because that’s the point.
Default export, the class that holds and manages a list.
Call it with either a forEach-able (like an array) or a set of arguments, to initialize the list.
The Array-ish methods all act like you’d expect. No magic length, though, so if you change that it won’t automatically prune or add empty spots.
Alias for Yallist function. Some people like factories.
The first node in the list
The last node in the list
The number of nodes in the list. (Change this at your peril. It is not magic like Array length.)
Convert the list to an array.
Call a function on each item in the list.
Call a function on each item in the list, in reverse order.
Get the data at position n in the list. If you use this a lot, probably better off just using an Array.
Get the data at position n, counting from the tail.
Create a new Yallist with the result of calling the function on each item.
Same as map, but in reverse.
Get the data from the list tail, and remove the tail from the list.
Insert one or more items to the tail of the list.
Like Array.reduce.
Like Array.reduce, but in reverse.
Reverse the list in place.
Get the data from the list head, and remove the head from the list.
Just like Array.slice, but returns a new Yallist.
Just like yallist.slice, but the result is returned in reverse.
Create an array representation of the list.
Create a reversed array representation of the list.
Insert one or more items to the head of the list.
Move a Node object to the front of the list. (That is, pull it out of wherever it lives, and make it the new head.)
If the node belongs to a different list, then that list will remove it first.
Move a Node object to the end of the list. (That is, pull it out of wherever it lives, and make it the new tail.)
If the node belongs to a list already, then that list will remove it first.
Remove a node from the list, preserving referential integrity of head and tail and other nodes.
Will throw an error if you try to have a list remove a node that doesn’t belong to it.
The class that holds the data and is actually the list.
Call with var n = new Node(value, previousNode, nextNode)
Note that if you do direct operations on Nodes themselves, it’s very easy to get into weird states where the list is broken. Be careful :)
The next node in the list.
The previous node in the list.
The data the node contains.
The list to which this node belongs. (Null if it does not belong to any list.)
A cross platform solution to node’s spawn and spawnSync.
Node.js version 8 and up: npm install cross-spawn
Node.js version 7 and under: npm install cross-spawn@6
Node has issues when using spawn on Windows:
- Has problems with commands using posix relative paths (e.g. ./my-folder/my-executable)
- Has an issue with command shims (files in node_modules/.bin/), where arguments with quotes and parenthesis would result in invalid syntax error
- No options.shell support on node <v4.8

All these issues are handled correctly by cross-spawn. There are some known modules, such as win-spawn, that try to solve this but they are either broken or provide faulty escaping of shell arguments.
Exactly the same way as node’s spawn or spawnSync, so it’s a drop in replacement.
const spawn = require('cross-spawn');
// Spawn NPM asynchronously
const child = spawn('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });
// Spawn NPM synchronously
const result = spawn.sync('npm', ['list', '-g', '-depth', '0'], { stdio: 'inherit' });

options.shell as an alternative to cross-spawn

Starting from node v4.8, spawn has a shell option that allows you to run commands from within a shell. This new option solves the PATHEXT issue but:
- it is not supported in node <v4.8

If you are using the shell option to spawn a command in a cross platform way, consider using cross-spawn instead. You have been warned.
options.shell support

While cross-spawn adds support for options.shell in node <v4.8, all of its enhancements are disabled.
This mimics the Node.js behavior. More specifically, the command and its arguments will not be automatically escaped, nor will shebang support be offered. This is by design because if you are using options.shell you are probably targeting a specific platform anyway and you don’t want things to get in your way.
While cross-spawn handles shebangs on Windows, its support is limited. More specifically, it just supports #!/usr/bin/env <program> where <program> must not contain any arguments.
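To make the limitation concrete, here is a hypothetical parser for the only shape that is supported (parseSimpleShebang is illustrative, not cross-spawn's internal code):

```javascript
// Accepts only `#!/usr/bin/env <program>` with no extra arguments,
// mirroring the limitation described above.
function parseSimpleShebang(firstLine) {
  var m = /^#!\/usr\/bin\/env ([^ ]+)\s*$/.exec(firstLine);
  return m ? m[1] : null;
}

console.log(parseSimpleShebang('#!/usr/bin/env node'));           // 'node'
console.log(parseSimpleShebang('#!/usr/bin/env node --harmony')); // null (arguments not supported)
```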
If you would like to have the shebang support improved, feel free to contribute via a pull-request.
Remember to always test your code on Windows!
npm test
npm test -- --watch during development
Returns true if a file path is absolute. Does not rely on the path module and can be used as a polyfill for node.js native path.isAbsolute.
Install with npm:
Originally based on the isAbsolute utility method in express.
var isAbsolute = require('is-absolute');
isAbsolute('a/b/c.js');
//=> false
isAbsolute('/a/b/c.js');
//=> true

Explicitly test posix paths
isAbsolute.posix('/foo/bar');
isAbsolute.posix('/user/docs/Letter.txt');
//=> true
isAbsolute.posix('foo/bar');
//=> false

Explicitly test windows paths
var isAbsolute = require('is-absolute');
isAbsolute.win32('c:\\');
isAbsolute.win32('//C://user\\docs\\Letter.txt');
isAbsolute.win32('\\\\unc\\share');
isAbsolute.win32('\\\\unc\\share\\foo');
isAbsolute.win32('\\\\unc\\share\\foo\\');
isAbsolute.win32('\\\\unc\\share\\foo\\bar');
isAbsolute.win32('\\\\unc\\share\\foo\\bar\\');
isAbsolute.win32('\\\\unc\\share\\foo\\bar\\baz');
//=> true
isAbsolute.win32('a:foo/a/b/c/d');
isAbsolute.win32(':\\');
isAbsolute.win32('foo\\bar\\baz');
isAbsolute.win32('foo\\bar\\baz\\');
//=> false

- true if the given string looks like a glob pattern or an extglob pattern… more | homepage
- true if the path appears to be relative. | homepage

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 35 | jonschlinkert |
| 1 | es128 |
| 1 | shinnn |
| 1 | Sobak |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 13, 2017.

# https-proxy-agent

### An HTTP(s) proxy http.Agent implementation for HTTPS
This module provides an http.Agent implementation that connects to a specified HTTP or HTTPS proxy server, and can be used with the built-in https module.
Specifically, this Agent implementation connects to an intermediary “proxy” server and issues the CONNECT HTTP method, which tells the proxy to open a direct TCP connection to the destination server.
Since this agent implements the CONNECT HTTP method, it also works with other protocols that use this method when connecting over proxies (i.e. WebSockets). See the “Examples” section below for more.
Install with npm:
https module example

var url = require('url');
var https = require('https');
var HttpsProxyAgent = require('https-proxy-agent');
// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);
// HTTPS endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'https://graph.facebook.com/tootallnate';
console.log('attempting to GET %j', endpoint);
var options = url.parse(endpoint);
// create an instance of the `HttpsProxyAgent` class with the proxy server information
var agent = new HttpsProxyAgent(proxy);
options.agent = agent;
https.get(options, function (res) {
console.log('"response" event!', res.headers);
res.pipe(process.stdout);
});

ws WebSocket connection example

var url = require('url');
var WebSocket = require('ws');
var HttpsProxyAgent = require('https-proxy-agent');
// HTTP/HTTPS proxy to connect to
var proxy = process.env.http_proxy || 'http://168.63.76.32:3128';
console.log('using proxy server %j', proxy);
// WebSocket endpoint for the proxy to connect to
var endpoint = process.argv[2] || 'ws://echo.websocket.org';
var parsed = url.parse(endpoint);
console.log('attempting to connect to WebSocket %j', endpoint);
// create an instance of the `HttpsProxyAgent` class with the proxy server information
var options = url.parse(proxy);
var agent = new HttpsProxyAgent(options);
// finally, initiate the WebSocket connection
var socket = new WebSocket(endpoint, { agent: agent });
socket.on('open', function () {
console.log('"open" event!');
socket.send('hello world');
});
socket.on('message', function (data, flags) {
console.log('"message" event! %j %j', data, flags);
socket.close();
});

The HttpsProxyAgent class implements an http.Agent subclass that connects to the specified “HTTP(s) proxy server” in order to proxy HTTPS and/or WebSocket requests. This is achieved by using the HTTP CONNECT method.
The options argument may either be a string URI of the proxy server to use, or an “options” object with more specific properties:
- host - String - Proxy host to connect to (may use hostname as well). Required.
- port - Number - Proxy port to connect to. Required.
- protocol - String - If https:, then use TLS to connect to the proxy.
- headers - Object - Additional HTTP headers to be sent on the HTTP CONNECT method.
- Any other options given are passed to the net.connect()/tls.connect() functions.

An incremental implementation of the MurmurHash3 (32-bit) hashing algorithm for JavaScript based on Gary Court’s implementation with kazuyukitanimura’s modifications.
This version works significantly faster than the non-incremental version if you need to hash many small strings into a single hash, since string concatenation (to build the single string to pass the non-incremental version) is fairly costly. In one case tested, using the incremental version was about 50% faster than concatenating 5-10 strings and then hashing.
To use iMurmurHash in the browser, download the latest version and include it as a script on your site.
<script type="text/javascript" src="/scripts/imurmurhash.min.js"></script>
<script>
// Your code here, access iMurmurHash using the global object MurmurHash3
</script>To use iMurmurHash in Node.js, install the module using NPM:
Then simply include it in your scripts:
// Create the initial hash
var hashState = MurmurHash3('string');
// Incrementally add text
hashState.hash('more strings');
hashState.hash('even more strings');
// All calls can be chained if desired
hashState.hash('and').hash('some').hash('more');
// Get a result
hashState.result();
// returns 0xe4ccfe6b

Get a hash state object, optionally initialized with the given string and seed. Seed must be a positive integer if provided. Calling this function without the new keyword will return a cached state object that has been reset. This is safe to use as long as the object is only used from a single thread and no other hashes are created while operating on this one. If this constraint cannot be met, you can use new to create a new state object. For example:
// Use the cached object, calling the function again will return the same
// object (but reset, so the current state would be lost)
hashState = MurmurHash3();
...
// Create a new object that can be safely used however you wish. Calling the
// function again will simply return a new state object, and no state loss
// will occur, at the cost of creating more objects.
hashState = new MurmurHash3();

Both methods can be mixed however you like if you have different use cases.
Incrementally add string to the hash. This can be called as many times as you want for the hash state object, including after a call to result(). Returns this so calls can be chained.
// Do the whole string at once
MurmurHash3('this is a test string').result();
// 0x70529328
// Do part of the string, get a result, then the other part
var m = MurmurHash3('this is a');
m.result();
// 0xbfc4f834
m.hash(' test string').result();
// 0x70529328 (same as above)

Reset the state object for reuse, optionally using the given seed (defaults to 0 like the constructor). Returns this so calls can be chained.
This library is a super small wrapper over node’s assert module that has two things: (1) the ability to disable assertions with the environment variable NODE_NDEBUG, and (2) some API wrappers for argument testing. Like assert.string(myArg, 'myArg'). As a simple example, most of my code looks like this:
var assert = require('assert-plus');
function fooAccount(options, callback) {
assert.object(options, 'options');
assert.number(options.id, 'options.id');
assert.bool(options.isManager, 'options.isManager');
assert.string(options.name, 'options.name');
assert.arrayOfString(options.email, 'options.email');
assert.func(callback, 'callback');
// Do stuff
callback(null, {});
}

All methods that aren’t part of node’s core assert API are simply assumed to take an argument, and then a string ‘name’ that’s not a message; AssertionError will be thrown if the assertion fails with a message like:
AssertionError: foo (string) is required
at test (/home/mark/work/foo/foo.js:3:9)
at Object.<anonymous> (/home/mark/work/foo/foo.js:15:1)
at Module._compile (module.js:446:26)
at Object..js (module.js:464:10)
at Module.load (module.js:353:31)
at Function._load (module.js:311:12)
at Array.0 (module.js:484:10)
at EventEmitter._tickCallback (node.js:190:38)
from:
There you go. You can check that arrays are of a homogeneous type with arrayOf$Type:
You can assert IFF an argument is not undefined (i.e., an optional arg):
Lastly, you can opt-out of assertion checking altogether by setting the environment variable NODE_NDEBUG=1. This is pseudo-useful if you have lots of assertions, and don’t want to pay typeof () taxes to v8 in production. Be advised: The standard functions re-exported from assert are also disabled in assert-plus if NDEBUG is specified. Using them directly from the assert module avoids this behavior.
The complete list of APIs is:
npm install assert-plus
See https://github.com/mcavage/node-assert-plus/issues.
An HTTP content negotiator for Node.js
availableMediaTypes = ['text/html', 'text/plain', 'application/json']
// The negotiator constructor receives a request object
negotiator = new Negotiator(request)
// Let's say Accept header is 'text/html, application/*;q=0.2, image/jpeg;q=0.8'
negotiator.mediaTypes()
// -> ['text/html', 'image/jpeg', 'application/*']
negotiator.mediaTypes(availableMediaTypes)
// -> ['text/html', 'application/json']
negotiator.mediaType(availableMediaTypes)
// -> 'text/html'You can check a working example at examples/accept.js.
Returns the most preferred media type from the client.
Returns the most preferred media type from a list of available media types.
Returns an array of preferred media types ordered by the client preference.
Returns an array of preferred media types ordered by priority from a list of available media types.
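As an illustration of the q-value ordering these methods perform, here is a small dependency-free sketch (simplified; negotiator's real parser also handles specificity, wildcards, and edge cases):

```javascript
// Simplified sketch of Accept-header q-value ordering
// (not negotiator's actual implementation):
function preferredMediaTypes(accept) {
  return accept.split(',')
    .map(part => {
      const [type, ...params] = part.trim().split(';');
      // default quality is 1 unless a q parameter says otherwise
      const q = params.reduce((acc, p) => {
        const [k, v] = p.trim().split('=');
        return k === 'q' ? parseFloat(v) : acc;
      }, 1);
      return { type: type.trim(), q };
    })
    .filter(entry => entry.q > 0)     // q=0 means "not acceptable"
    .sort((a, b) => b.q - a.q)        // highest quality first (stable sort)
    .map(entry => entry.type);
}

console.log(preferredMediaTypes('text/html, application/*;q=0.2, image/jpeg;q=0.8'));
// → ['text/html', 'image/jpeg', 'application/*']
```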
negotiator = new Negotiator(request)
availableLanguages = ['en', 'es', 'fr']
// Let's say Accept-Language header is 'en;q=0.8, es, pt'
negotiator.languages()
// -> ['es', 'pt', 'en']
negotiator.languages(availableLanguages)
// -> ['es', 'en']
language = negotiator.language(availableLanguages)
// -> 'es'You can check a working example at examples/language.js.
Returns the most preferred language from the client.
Returns the most preferred language from a list of available languages.
Returns an array of preferred languages ordered by the client preference.
Returns an array of preferred languages ordered by priority from a list of available languages.
availableCharsets = ['utf-8', 'iso-8859-1', 'iso-8859-5']
negotiator = new Negotiator(request)
// Let's say Accept-Charset header is 'utf-8, iso-8859-1;q=0.8, utf-7;q=0.2'
negotiator.charsets()
// -> ['utf-8', 'iso-8859-1', 'utf-7']
negotiator.charsets(availableCharsets)
// -> ['utf-8', 'iso-8859-1']
negotiator.charset(availableCharsets)
// -> 'utf-8'You can check a working example at examples/charset.js.
Returns the most preferred charset from the client.
Returns the most preferred charset from a list of available charsets.
Returns an array of preferred charsets ordered by the client preference.
Returns an array of preferred charsets ordered by priority from a list of available charsets.
availableEncodings = ['identity', 'gzip']
negotiator = new Negotiator(request)
// Let's say Accept-Encoding header is 'gzip, compress;q=0.2, identity;q=0.5'
negotiator.encodings()
// -> ['gzip', 'identity', 'compress']
negotiator.encodings(availableEncodings)
// -> ['gzip', 'identity']
negotiator.encoding(availableEncodings)
// -> 'gzip'You can check a working example at examples/encoding.js.
Returns the most preferred encoding from the client.
Returns the most preferred encoding from a list of available encodings.
Returns an array of preferred encodings ordered by the client preference.
Returns an array of preferred encodings ordered by priority from a list of available encodings.
The accepts module builds on this module and provides an alternative interface, mime type validation, and more.
Define a non-enumerable property on an object. Uses Reflect.defineProperty when available, otherwise Object.defineProperty.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
See the CHANGELOG for updates.
Params
- object: The object on which to define the property.
- key: The name of the property to be defined or modified.
- value: The value or descriptor of the property being defined or modified.

var define = require('define-property');
var obj = {};
define(obj, 'foo', function(val) {
return val.toUpperCase();
});
// by default, defined properties are non-enumerable
console.log(obj);
//=> {}
console.log(obj.foo('bar'));
//=> 'BAR'defining setters/getters
Pass the same properties you would if using Object.defineProperty or Reflect.defineProperty.
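For example, a get/set descriptor passed straight to Object.defineProperty (which is what define-property uses when Reflect.defineProperty is unavailable) looks like this:

```javascript
// Accessor descriptor sketched with Object.defineProperty directly;
// define-property forwards a descriptor like this one unchanged:
const obj = {};
Object.defineProperty(obj, 'upper', {
  configurable: true,
  set(val) { this._val = val.toUpperCase(); },
  get() { return this._val; }
});

obj.upper = 'bar';
console.log(obj.upper); // 'BAR'
// the accessor itself stays non-enumerable:
console.log(Object.keys(obj)); // ['_val']
```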
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 28 | jonschlinkert |
| 1 | doowb |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on January 25, 2018.
A simple utility for replacing the projectid token in objects.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
const {replaceProjectIdToken} = require('@google-cloud/projectify');
const options = {
projectId: '{{projectId}}',
};
replaceProjectIdToken(options, 'fake-project-id');Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.
| Sample | Source Code | Try it |
|---|---|---|
| Quickstart | source code | ![]() |
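The token replacement shown in the quickstart can be sketched as follows (a hypothetical re-implementation for illustration, not the library's source; it only shows the recursive {{projectId}} substitution):

```javascript
// Hypothetical sketch of recursive {{projectId}} token replacement
// (illustration only, not @google-cloud/projectify's actual code):
function replaceProjectIdTokenSketch(value, projectId) {
  if (Array.isArray(value)) {
    return value.map(v => replaceProjectIdTokenSketch(v, projectId));
  }
  if (value !== null && typeof value === 'object') {
    for (const key of Object.keys(value)) {
      value[key] = replaceProjectIdTokenSketch(value[key], projectId);
    }
    return value;
  }
  if (typeof value === 'string') {
    return value.replace(/\{\{projectId\}\}/g, projectId);
  }
  return value;
}

const options = {
  projectId: '{{projectId}}',
  uri: 'projects/{{projectId}}/topics/my-topic',
};
replaceProjectIdTokenSketch(options, 'fake-project-id');
console.log(options.projectId); // 'fake-project-id'
```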
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).
Legacy Node.js versions are supported as a best effort:
legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.This library follows Semantic Versioning.
This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.
Apache Version 2.0
See LICENSE
An HTTP request client that provides an axios-like interface on top of node-fetch.
const {request} = require('gaxios');
const res = await request({
url: 'https://www.googleapis.com/discovery/v1/apis/'
});Gaxios supports setting default properties both on the default instance, and on additional instances. This is often useful when making many requests to the same domain with the same base settings. For example:
const gaxios = require('gaxios');
gaxios.instance.defaults = {
baseURL: 'https://example.com',
headers: {
Authorization: 'SOME_TOKEN'
}
}
gaxios.request({url: '/data'}).then(...);{
// The url to which the request should be sent. Required.
url: string,
// The HTTP method to use for the request. Defaults to `GET`.
method: 'GET',
// The base Url to use for the request. Prepended to the `url` property above.
baseURL: 'https://example.com';
// The HTTP headers to be sent with the request.
headers: { 'some': 'header' },
// The data to send in the body of the request. Data objects will be
// serialized as JSON.
//
// Note: if you would like to provide a Content-Type header other than
// application/json you must provide a string or readable stream, rather
// than an object:
// data: JSON.stringify({some: 'data'})
// data: fs.readFile('./some-data.jpeg')
data: {
some: 'data'
},
// The max size of the http response content in bytes allowed.
// Defaults to `0`, which is the same as unset.
maxContentLength: 2000,
// The max number of HTTP redirects to follow.
// Defaults to 100.
maxRedirects: 100,
// The querystring parameters that will be encoded using `qs` and
// appended to the url
params: {
querystring: 'parameters'
},
// By default, we use the `querystring` package in node core to serialize
// querystring parameters. You can override that and provide your
// own implementation.
paramsSerializer: (params) => {
return qs.stringify(params);
},
// The timeout for the HTTP request. Defaults to 0.
timeout: 1000,
// Optional method to override making the actual HTTP request. Useful
// for writing tests and instrumentation
adapter?: async (options, defaultAdapter) => {
const res = await defaultAdapter(options);
res.data = {
...res.data,
extraProperty: 'your extra property',
};
return res;
};
// The expected return type of the request. Options are:
// json | stream | blob | arraybuffer | text
// Defaults to `json`.
responseType: 'json',
// The node.js http agent to use for the request.
agent: someHttpsAgent,
// Custom function to determine if the response is valid based on the
// status code. Defaults to (>= 200 && < 300)
validateStatus: (status: number) => true,
// Implementation of `fetch` to use when making the API call. By default,
// will use the browser context if available, and fall back to `node-fetch`
// in node.js otherwise.
fetchImplementation?: typeof fetch;
// Configuration for retrying of requests.
retryConfig: {
// The number of times to retry the request. Defaults to 3.
retry?: number;
// The number of retries already attempted.
currentRetryAttempt?: number;
// The HTTP Methods that will be automatically retried.
// Defaults to ['GET','PUT','HEAD','OPTIONS','DELETE']
httpMethodsToRetry?: string[];
// The HTTP response status codes that will automatically be retried.
// Defaults to: [[100, 199], [429, 429], [500, 599]]
statusCodesToRetry?: number[][];
// Function to invoke when a retry attempt is made.
onRetryAttempt?: (err: GaxiosError) => Promise<void> | void;
// Function to invoke which determines if you should retry
shouldRetry?: (err: GaxiosError) => Promise<boolean> | boolean;
// When there is no response, the number of retries to attempt. Defaults to 2.
noResponseRetries?: number;
// The amount of time to initially delay the retry, in ms. Defaults to 100ms.
retryDelay?: number;
},
// Enables default configuration for retries.
retry: boolean,
// Cancelling a request requires the `abort-controller` library.
// See https://github.com/bitinn/node-fetch#request-cancellation-with-abortsignal
signal?: AbortSignal
}sprintf.js is a complete open source JavaScript sprintf implementation for the browser and node.js.
Its prototype is simple:
string sprintf(string format , [mixed arg1 [, mixed arg2 [ ,...]]])
The placeholders in the format string are marked by % and are followed by one or more of these elements, in this order:
- An optional number followed by a $ sign that selects which argument index to use for the value. If not specified, arguments will be placed in the same order as the placeholders in the input string.
- An optional + sign that forces the result to be preceded with a plus or minus sign on numeric values. By default, only the - sign is used on negative numbers.
- An optional padding specifier: 0 or any other character preceded by a ' (single quote). The default is to pad with spaces.
- An optional - sign, that causes sprintf to left-align the result of this placeholder. The default is to right-align the result.
- An optional number that says how many characters the result should have. When used with the j (JSON) type specifier, the padding length specifies the tab size used for indentation.
- An optional precision modifier, a . (dot) followed by a number, that says how many digits should be displayed for floating point numbers. When used with the g type specifier, it specifies the number of significant digits. When used on a string, it causes the result to be truncated.
- A type specifier, one of:
  - % — yields a literal % character
  - b — yields an integer as a binary number
  - c — yields an integer as the character with that ASCII value
  - d or i — yields an integer as a signed decimal number
  - e — yields a float using scientific notation
  - u — yields an integer as an unsigned decimal number
  - f — yields a float as is; see notes on precision above
  - g — yields a float as is; see notes on precision above
  - o — yields an integer as an octal number
  - s — yields a string as is
  - x — yields an integer as a hexadecimal number (lower-case)
  - X — yields an integer as a hexadecimal number (upper-case)
  - j — yields a JavaScript object or array as a JSON encoded string

vsprintf

vsprintf is the same as sprintf except that it accepts an array of arguments, rather than a variable number of arguments:
vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])
You can also swap the arguments. That is, the order of the placeholders doesn’t have to match the order of the arguments. You can do that by simply indicating in the format string which arguments the placeholders refer to:
sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")
And, of course, you can repeat the placeholders without having to increase the number of arguments.
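Argument swapping can be illustrated with a tiny toy formatter supporting only plain %s and positional %N$s placeholders (a sketch of the semantics, not sprintf-js itself):

```javascript
// Toy formatter: plain %s consumes arguments in order, while %N$s
// picks the N-th argument explicitly (1-based), so arguments can be
// reordered or repeated without being passed twice.
function miniSprintf(format, ...args) {
  let i = 0;
  return format.replace(/%(?:(\d+)\$)?s/g, (match, pos) =>
    String(pos ? args[pos - 1] : args[i++]));
}

console.log(miniSprintf('%2$s %3$s a %1$s', 'cracker', 'Polly', 'wants'));
// 'Polly wants a cracker'
console.log(miniSprintf('%s, %s and %s', 'a', 'b', 'c'));
// 'a, b and c'
```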
Format strings may contain replacement fields rather than positional placeholders. Instead of referring to a certain argument, you can now refer to a certain key within an object. Replacement fields are surrounded by rounded parentheses - ( and ) - and begin with a keyword that refers to a key:
var user = {
name: "Dolly"
}
sprintf("Hello %(name)s", user) // Hello Dolly
Keywords in replacement fields can be optionally followed by any number of keywords or indexes:
var users = [
{name: "Dolly"},
{name: "Molly"},
{name: "Polly"}
]
sprintf("Hello %(users[0].name)s, %(users[1].name)s and %(users[2].name)s", {users: users}) // Hello Dolly, Molly and Polly
Note: mixing positional and named placeholders is not (yet) supported
You can pass in a function as a dynamic value and it will be invoked (with no arguments) in order to compute the value on-the-fly.
sprintf("Current timestamp: %d", Date.now) // Current timestamp: 1398005382890
sprintf("Current date and time: %s", function() { return new Date().toString() })
You can now use sprintf and vsprintf (also aliased as fmt and vfmt respectively) in your AngularJS projects. See demo/.
bower install sprintf
npm install sprintf-js
var sprintf = require("sprintf-js").sprintf,
vsprintf = require("sprintf-js").vsprintf
sprintf("%2$s %3$s a %1$s", "cracker", "Polly", "wants")
vsprintf("The first 4 letters of the english alphabet are: %s, %s, %s and %s", ["a", "b", "c", "d"])
Estraverse (estraverse) provides ECMAScript traversal functions from the esmangle project.
You can find usage docs at wiki page.
The following code will output all variables declared at the root of a file.
estraverse.traverse(ast, {
enter: function (node, parent) {
if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
return estraverse.VisitorOption.Skip;
},
leave: function (node, parent) {
if (node.type == 'VariableDeclarator')
console.log(node.id.name);
}
});We can use this.skip, this.remove and this.break functions instead of using Skip, Remove and Break.
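The depth-first enter/leave order can be sketched without the library (a simplification: estraverse actually uses per-node-type key tables, while this sketch assumes every child node lives in a property that is an object with a `type` string):

```javascript
// Simplified depth-first AST walk illustrating enter/leave order
// (not estraverse's actual implementation):
function traverse(node, visitor) {
  if (visitor.enter) visitor.enter(node);
  for (const key of Object.keys(node)) {
    const child = node[key];
    if (child && typeof child === 'object' && typeof child.type === 'string') {
      traverse(child, visitor);
    } else if (Array.isArray(child)) {
      child.forEach(c => c && typeof c.type === 'string' && traverse(c, visitor));
    }
  }
  if (visitor.leave) visitor.leave(node);
}

const types = [];
traverse(
  { type: 'BinaryExpression',
    left: { type: 'Literal', value: 1 },
    right: { type: 'Literal', value: 2 } },
  { enter: node => types.push(node.type) }
);
console.log(types); // ['BinaryExpression', 'Literal', 'Literal']
```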
And estraverse provides estraverse.replace function. When returning node from enter/leave, current node is replaced with it.
result = estraverse.replace(tree, {
enter: function (node) {
// Replace it with replaced.
if (node.type === 'Literal')
return replaced;
}
});By passing visitor.keys mapping, we can extend estraverse traversing functionality.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Extending the existing traversing rules.
keys: {
// TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
TestExpression: ['argument']
}
});By passing visitor.fallback option, we can control the behavior when encountering unknown nodes.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Iterating the child **nodes** of unknown nodes.
fallback: 'iteration'
});When visitor.fallback is a function, we can determine which keys to visit on each node.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Skip the `argument` property of each node
fallback: function(node) {
return Object.keys(node).filter(function(key) {
return key !== 'argument';
});
}
});Estraverse (estraverse) is ECMAScript traversal functions from esmangle project.
You can find usage docs at wiki page.
The following code will output all variables declared at the root of a file.
estraverse.traverse(ast, {
enter: function (node, parent) {
if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
return estraverse.VisitorOption.Skip;
},
leave: function (node, parent) {
if (node.type == 'VariableDeclarator')
console.log(node.id.name);
}
});We can use this.skip, this.remove and this.break functions instead of using Skip, Remove and Break.
And estraverse provides estraverse.replace function. When returning node from enter/leave, current node is replaced with it.
result = estraverse.replace(tree, {
enter: function (node) {
// Replace it with replaced.
if (node.type === 'Literal')
return replaced;
}
});By passing visitor.keys mapping, we can extend estraverse traversing functionality.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Extending the existing traversing rules.
keys: {
// TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
TestExpression: ['argument']
}
});By passing visitor.fallback option, we can control the behavior when encountering unknown nodes.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Iterating the child **nodes** of unknown nodes.
fallback: 'iteration'
});When visitor.fallback is a function, we can determine which keys to visit on each node.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Skip the `argument` property of each node
fallback: function(node) {
return Object.keys(node).filter(function(key) {
return key !== 'argument';
});
}
});Estraverse (estraverse) is ECMAScript traversal functions from esmangle project.
You can find usage docs at wiki page.
The following code will output all variables declared at the root of a file.
estraverse.traverse(ast, {
enter: function (node, parent) {
if (node.type == 'FunctionExpression' || node.type == 'FunctionDeclaration')
return estraverse.VisitorOption.Skip;
},
leave: function (node, parent) {
if (node.type == 'VariableDeclarator')
console.log(node.id.name);
}
});We can use this.skip, this.remove and this.break functions instead of using Skip, Remove and Break.
And estraverse provides estraverse.replace function. When returning node from enter/leave, current node is replaced with it.
result = estraverse.replace(tree, {
enter: function (node) {
// Replace it with replaced.
if (node.type === 'Literal')
return replaced;
}
});By passing visitor.keys mapping, we can extend estraverse traversing functionality.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Extending the existing traversing rules.
keys: {
// TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
TestExpression: ['argument']
}
});By passing visitor.fallback option, we can control the behavior when encountering unknown nodes.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Iterating the child **nodes** of unknown nodes.
fallback: 'iteration'
});When visitor.fallback is a function, we can determine which keys to visit on each node.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
estraverse.traverse(tree, {
enter: function (node) { },
// Skip the `argument` property of each node
fallback: function(node) {
return Object.keys(node).filter(function(key) {
return key !== 'argument';
});
}
});Execute a callback when a HTTP request closes, finishes, or errors.
Attach a listener to listen for the response to finish. The listener will be invoked only once when the response finished. If the response finished to an error, the first argument will contain the error. If the response has already finished, the listener will be invoked.
Listening to the end of a response would be used to close things associated with the response, like open files.
Listener is invoked as listener(err, res).
onFinished(res, function (err, res) {
// clean up open fds, etc.
// err contains the error if the request errored
})Attach a listener to listen for the request to finish. The listener will be invoked only once when the request finished. If the request finished to an error, the first argument will contain the error. If the request has already finished, the listener will be invoked.
Listening to the end of a request would be used to know when to continue after reading the data.
Listener is invoked as listener(err, req).
var data = ''
req.setEncoding('utf8')
req.on('data', function (str) {
data += str
})
onFinished(req, function (err, req) {
// data is read unless there is err
})Determine if res is already finished. This would be useful to check and not even start certain operations if the response has already finished.
Determine if req is already finished. This would be useful to check and not even start certain operations if the request has already finished.
The meaning of the CONNECT method from RFC 7231, section 4.3.6:
The CONNECT method requests that the recipient establish a tunnel to the destination origin server identified by the request-target and, if successful, thereafter restrict its behavior to blind forwarding of packets, in both directions, until the tunnel is closed. Tunnels are commonly used to create an end-to-end virtual connection, through one or more proxies, which can then be secured using TLS (Transport Layer Security, [RFC5246]).
In Node.js, these request objects come from the 'connect' event on the HTTP server.
When this module is used on an HTTP CONNECT request, the request is considered “finished” immediately, due to limitations in the Node.js interface. This means if the CONNECT request contains a request entity, the request will be considered “finished” even before it has been read.
There is no such thing as a response object to a CONNECT request in Node.js, so there is no support for one.
The meaning of the Upgrade header from RFC 7230, section 6.1:
The “Upgrade” header field is intended to provide a simple mechanism for transitioning from HTTP/1.1 to some other protocol on the same connection.
In Node.js, these request objects come from the 'upgrade' event on the HTTP server.
When this module is used on an HTTP request with an Upgrade header, the request is considered “finished” immediately, due to limitations in the Node.js interface. This means if the Upgrade request contains a request entity, the request will be considered “finished” even before it has been read.
There is no such thing as a response object to an Upgrade request in Node.js, so there is no support for one.
The following code ensures that file descriptors are always closed once the response finishes.
var destroy = require('destroy')
var fs = require('fs')
var http = require('http')
var onFinished = require('on-finished')
http.createServer(function onRequest(req, res) {
var stream = fs.createReadStream('package.json')
stream.pipe(res)
onFinished(res, function (err) {
destroy(stream)
})
})Punycode.js is a robust Punycode converter that fully complies to RFC 3492 and RFC 5891.
This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:
- punycode.c by Markus W. Scherer (IBM)
- punycode.c by Ben Noordhuis
- punycode.js by Ben Noordhuis (note: not fully compliant)

This project was bundled with Node.js from v0.6.2+ until v7 (soft-deprecated).
The current version supports recent versions of Node.js only. It provides a CommonJS module and an ES6 module. For the old version that offers the same functionality with broader support, including Rhino, Ringo, Narwhal, and web browsers, see v1.4.1.
Via npm:
In Node.js:
punycode.decode(string)Converts a Punycode string of ASCII symbols to a string of Unicode symbols.
// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'punycode.encode(string)Converts a string of Unicode symbols to a Punycode string of ASCII symbols.
// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'punycode.toUnicode(input)Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.
// decode domain names
punycode.toUnicode('xn--maana-pta.com');
// → 'mañana.com'
punycode.toUnicode('xn----dqo34k.com');
// → '☃-⌘.com'
// decode email addresses
punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq');
// → 'джумла@джpумлатест.bрфa'punycode.toASCII(input)Converts a lowercased Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII.
// encode domain names
punycode.toASCII('mañana.com');
// → 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com');
// → 'xn----dqo34k.com'
// encode email addresses
punycode.toASCII('джумла@джpумлатест.bрфa');
// → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'punycode.ucs2punycode.ucs2.decode(string)Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.
punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]punycode.ucs2.encode(codePoints)Creates a string based on an array of numeric code point values.
punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'punycode.versionA string representing the current Punycode.js version number.
| Mathias Bynens |
Map visit over an array of objects.
Install with npm:
Assign/Merge/Extend vs. Visit
Let’s say you want to add a set method to your application that will:

- set a single key/value pair on a data object
- extend the data object with an object, or with an array of objects

Example using extend
Here is one way to accomplish this using Lo-Dash’s extend (comparable to Object.assign):
var _ = require('lodash');
var obj = {
data: {},
set: function (key, value) {
if (Array.isArray(key)) {
_.extend.apply(_, [obj.data].concat(key));
} else if (typeof key === 'object') {
_.extend(obj.data, key);
} else {
obj.data[key] = value;
}
}
};
obj.set('a', 'a');
obj.set([{b: 'b'}, {c: 'c'}]);
obj.set({d: {e: 'f'}});
console.log(obj.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }}The above approach works fine for most use cases. However, if you also want to emit an event each time a property is added to the data object, or you want more control over what happens as the object is extended, a better approach would be to use visit.
Example using visit
In this approach:

- When an array is passed to set, the map-visit library calls the set method once for each object in the array.
- When an object is passed, visit calls set on each property in the object.

As a result, the data event will be emitted every time a property is added to data (events are just an example, you can use this approach to perform any necessary logic every time the method is called).
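The example that follows uses the real packages; as a rough, dependency-free sketch, hypothetical minimal versions of the two helpers look like this:

```javascript
// Hypothetical minimal versions of object-visit and map-visit
// (illustration only, not the libraries' actual source):
function visit(thisArg, method, target) {
  // call thisArg[method](key, value) for each own property of target
  for (const key of Object.keys(target)) {
    thisArg[method](key, target[key]);
  }
  return thisArg;
}

function mapVisit(thisArg, method, arr) {
  // visit each object in the array in turn
  arr.forEach(obj => visit(thisArg, method, obj));
  return thisArg;
}

const store = { data: {}, set(key, value) { store.data[key] = value; } };
mapVisit(store, 'set', [{ a: 1 }, { b: 2 }]);
visit(store, 'set', { c: 3 });
console.log(store.data); // { a: 1, b: 2, c: 3 }
```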
var mapVisit = require('map-visit');
var visit = require('object-visit');
var obj = {
data: {},
set: function (key, value) {
if (Array.isArray(key)) {
mapVisit(obj, 'set', key);
} else if (typeof key === 'object') {
visit(obj, 'set', key);
} else {
// simulate an event-emitter
console.log('emit', key, value);
obj.data[key] = value;
}
}
};
obj.set('a', 'a');
obj.set([{b: 'b'}, {c: 'c'}]);
obj.set({d: {e: 'f'}});
obj.set({g: 'h', i: 'j', k: 'l'});
console.log(obj.data);
//=> {a: 'a', b: 'b', c: 'c', d: { e: 'f' }, g: 'h', i: 'j', k: 'l'}
// events would look something like:
// emit a a
// emit b b
// emit c c
// emit d { e: 'f' }
// emit g h
// emit i j
// emit k lPull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 15 | jonschlinkert |
| 7 | doowb |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.5.0, on April 09, 2017.

# @nodelib/fs.scandir
List files and directories inside the specified directory.
The package is aimed at obtaining information about entries in the directory.
- Returns entries with name, path, dirent and stats (optional).
- Has old and modern mode.

npm install @nodelib/fs.scandir
import * as fsScandir from '@nodelib/fs.scandir';
fsScandir.scandir('path', (error, stats) => { /* … */ });Returns an array of plain objects (Entry) with information about entry for provided path with standard callback-style.
fsScandir.scandir('path', (error, entries) => { /* … */ });
fsScandir.scandir('path', {}, (error, entries) => { /* … */ });
fsScandir.scandir('path', new fsScandir.Settings(), (error, entries) => { /* … */ });Returns an array of plain objects (Entry) with information about entry for provided path.
const entries = fsScandir.scandirSync('path');
const entries = fsScandir.scandirSync('path', {});
const entries = fsScandir.scandirSync('path', new fsScandir.Settings());
path — Required: true. Type: string | Buffer | URL. A path to a file. If a URL is provided, it must use the file: protocol.
optionsOrSettings — Required: false. Type: Options | Settings. Default: an instance of the Settings class. An Options object or an instance of the Settings class.
:book: When you pass a plain object, an instance of the
Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.
A class holding the full settings of the package.
const settings = new fsScandir.Settings({ followSymbolicLinks: false });
const entries = fsScandir.scandirSync('path', settings);
Each Entry has the following fields:

- name — The name of the entry (unknown.txt).
- path — The path of the entry relative to the call directory (root/unknown.txt).
- dirent — An instance of the fs.Dirent class. On Node.js below 10.10 it is emulated by the DirentFromStats class.
- stats (optional) — An instance of the fs.Stats class.

For example, a scandir call for a tools directory with one directory inside:
stats — Type: boolean. Default: false. Adds an instance of the fs.Stats class to the Entry.
:book: Always use
fs.readdir without the withFileTypes option.
followSymbolicLinks — Type: boolean. Default: false. Follow symbolic links or not. Calls fs.stat on a symbolic link if true.
throwErrorOnBrokenSymbolicLink — Type: boolean. Default: true. Throw an error when a symbolic link is broken if true, or safely fall back to the lstat call if false.
pathSegmentSeparator — Type: string. Default: path.sep. By default, this package uses the correct path separator for your OS (\ on Windows, / on Unix-like systems), but you can set this option to any separator character(s) you want to use instead.
fs — Type: FileSystemAdapter. By default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.
interface FileSystemAdapter {
lstat?: typeof fs.lstat;
stat?: typeof fs.stat;
lstatSync?: typeof fs.lstatSync;
statSync?: typeof fs.statSync;
readdir?: typeof fs.readdir;
readdirSync?: typeof fs.readdirSync;
}
const settings = new fsScandir.Settings({
fs: { lstat: fakeLstat }
});
## old and modern mode
This package has two modes that are used depending on the environment and parameters of use.
old mode (Node.js below 10.10, or when the stats option is enabled): the directory is read first (fs.readdir), then the type of each entry is determined (fs.lstat and/or fs.stat for symbolic links).
modern mode (the stats option is disabled): reading the directory (fs.readdir with the withFileTypes option) is combined with obtaining information about its entries. An additional call for symbolic links (fs.stat) is still present.
This mode makes fewer calls to the file system. It’s faster.
See the Releases section of our GitHub project for changelog for each release version.
This module provides an http.Agent generator. That is, you pass it an async callback function, and it returns a new http.Agent instance that will invoke the given callback function when sending outbound HTTP requests.
Here’s some more interesting uses of agent-base. Send a pull request to list yours!
- http-proxy-agent: An HTTP(s) proxy http.Agent implementation for HTTP endpoints
- https-proxy-agent: An HTTP(s) proxy http.Agent implementation for HTTPS endpoints
- pac-proxy-agent: A PAC file proxy http.Agent implementation for HTTP and HTTPS
- socks-proxy-agent: A SOCKS proxy http.Agent implementation for HTTP and HTTPS

Install with npm:
Here’s a minimal example that creates a new net.Socket connection to the server for every HTTP request (i.e. the equivalent of agent: false option):
var net = require('net');
var tls = require('tls');
var url = require('url');
var http = require('http');
var agent = require('agent-base');
var endpoint = 'http://nodejs.org/api/';
var parsed = url.parse(endpoint);
// This is the important part!
parsed.agent = agent(function (req, opts) {
var socket;
// `secureEndpoint` is true when using the https module
if (opts.secureEndpoint) {
socket = tls.connect(opts);
} else {
socket = net.connect(opts);
}
return socket;
});
// Everything else works just like normal...
http.get(parsed, function (res) {
console.log('"response" event!', res.headers);
res.pipe(process.stdout);
});
Returning a Promise or using an async function is also supported:
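A minimal sketch of the async-callback shape: instead of passing the socket to a cb argument, the callback simply returns it (or a promise resolving to it). The connect parameter here is a hypothetical stand-in for net.connect/tls.connect so the sketch runs without opening a real socket:

```javascript
// Sketch of an async connection callback; `connect` stands in for
// net.connect/tls.connect (hypothetical, for illustration only).
async function createConnection(opts, connect) {
  // Returning the socket resolves the promise the agent awaits.
  return connect(opts);
}

createConnection({ secureEndpoint: false }, (opts) => ({ fake: true, opts }))
  .then((socket) => {
    console.log(socket.fake); // true
  });
```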
Return another http.Agent instance to “pass through” the responsibility for that HTTP request to that agent:
Creates a base http.Agent that will execute the callback function callback for every HTTP request that it is used as the agent for. The callback function is responsible for creating a stream.Duplex instance of some kind that will be used as the underlying socket in the HTTP request.
The options object accepts the following properties:
- timeout - Number - Timeout for the callback() function in milliseconds. Defaults to Infinity (optional).

The callback function should have the following signature:
The ClientRequest req can be accessed to read request headers and the path, etc. The options object contains the options passed to the http.request()/https.request() function call, and is formatted to be directly passed to net.connect()/tls.connect(), or however else you want a Socket to be created. Pass the created socket to the callback function cb once created, and the HTTP request will continue to proceed.
If the https module is used to invoke the HTTP request, then the secureEndpoint property on options will be set to true.
Repeat the given string n times. Fastest implementation for repeating a string.
Install with npm:
Repeat the given string the specified number of times.
Example:
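A minimal sketch of the doubling technique that fast string repeaters commonly use (an illustration, not this module's actual source):

```javascript
// Build the result by repeatedly doubling a chunk string instead of
// appending one copy per iteration (binary decomposition of `num`).
function repeatString(str, num) {
  let result = '';
  while (num > 0) {
    if (num & 1) result += str; // append when the low bit is set
    num >>= 1;
    str += str; // double the chunk
  }
  return result;
}

console.log(repeatString('A', 5));   // AAAAA
console.log(repeatString('abc', 3)); // abcabcabc
```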
Params
- string {String}: The string to repeat
- number {Number}: The number of times to repeat the string
- returns {String}: Repeated string

repeat-string is significantly faster than the native method (which is itself faster than the repeating package):
# 2x
repeat-string █████████████████████████ (26,953,977 ops/sec)
repeating █████████ (9,855,695 ops/sec)
native ██████████████████ (19,453,895 ops/sec)
# 3x
repeat-string █████████████████████████ (19,445,252 ops/sec)
repeating ███████████ (8,661,565 ops/sec)
native ████████████████████ (16,020,598 ops/sec)
# 10x
repeat-string █████████████████████████ (23,792,521 ops/sec)
repeating █████████ (8,571,332 ops/sec)
native ███████████████ (14,582,955 ops/sec)
# 50x
repeat-string █████████████████████████ (23,640,179 ops/sec)
repeating █████ (5,505,509 ops/sec)
native ██████████ (10,085,557 ops/sec)
# 250x
repeat-string █████████████████████████ (23,489,618 ops/sec)
repeating ████ (3,962,937 ops/sec)
native ████████ (7,724,892 ops/sec)
# 2000x
repeat-string █████████████████████████ (20,315,172 ops/sec)
repeating ████ (3,297,079 ops/sec)
native ███████ (6,203,331 ops/sec)
# 20000x
repeat-string █████████████████████████ (23,382,915 ops/sec)
repeating ███ (2,980,058 ops/sec)
native █████ (5,578,808 ops/sec)
Run the benchmarks
Install dev dependencies:
repeat-element: Create an array by repeating the given value n times. | homepage
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 51 | jonschlinkert |
| 2 | LinusU |
| 2 | tbusser |
| 1 | doowb |
| 1 | wooorm |
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.2.0, on October 23, 2016.
# Tapable
Tapable is a class for plugin binding and applying.
Just extend it.
function MyClass() {
Tapable.call(this);
}
MyClass.prototype = Object.create(Tapable.prototype);
MyClass.prototype.method = function() {};
Or mix it in.
function MyClass2() {
EventEmitter.call(this);
Tapable.call(this);
}
MyClass2.prototype = Object.create(EventEmitter.prototype);
Tapable.mixin(MyClass2.prototype);
MyClass2.prototype.method = function() {};
Attaches all plugins passed as arguments to the instance, by calling apply on them.
names are the names (or a single name) of the plugin interfaces the class provides.
handler is a callback function. The signature depends on the class. this is the instance of the class.
Should only be called from a handler function.
It restarts the process of applying handlers.
Synchronously applies all registered handlers for name. The handler functions are called with all args.
Synchronously applies all registered handlers for name. The handler functions are called with the return value of the previous handler and all args. For the first handler, init is used, and the return value of the last handler is returned by applyPluginsWaterfall.
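A minimal sketch of the waterfall semantics in plain JavaScript (not the real Tapable implementation):

```javascript
// Each handler receives the previous handler's return value plus the
// extra args; the first handler receives `init`.
function applyPluginsWaterfall(handlers, init, ...args) {
  let current = init;
  for (const handler of handlers) {
    current = handler(current, ...args);
  }
  return current;
}

const result = applyPluginsWaterfall(
  [(value, suffix) => value + suffix, (value) => value.toUpperCase()],
  'base',
  '-x'
);
console.log(result); // BASE-X
```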
Asynchronously applies all registered handlers for name. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. The handler functions are called in order of registration.
callback is called after all handlers are called.
Synchronously applies all registered handlers for name. The handler functions are called with all args. If a handler function returns something !== undefined, that value is returned and no more handlers are applied.
Asynchronously applies all registered handlers for name. The handler functions are called with the current value and a callback function with the signature (err: Error, nextValue: any) -> void. When called, nextValue becomes the current value for the next handler. The current value for the first handler is init. After all handlers are applied, callback is called with the last value. If any handler passes a value for err, the callback is called with this error and no more handlers are called.
Asynchronously applies all registered handlers for name. The handler functions are called with all args and a callback function with the signature (err: Error) -> void. The handlers are called in series, one at a time. After all handlers are applied, callback is called. If any handler passes a value for err, the callback is called with this error and no more handlers are called.
Applies all registered handlers for name in parallel. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. The callback function is called when all handlers have called the callback without err. If any handler calls the callback with err, callback is invoked with this error and the other handlers are ignored.
restartApplyPlugins cannot be used.
applyPluginsParallelBailResult(
name: string,
args: any...,
callback: (err: Error, result: any) -> void
)
Applies all registered handlers for name in parallel. The handler functions are called with all args and a callback function with the signature (err?: Error) -> void. Handler functions must call the callback. They can pass an error, pass undefined, or pass a value. The first result (either error or value) that is not undefined is passed to the callback. The order is defined by registration, not by the speed of the handler functions; this function compensates for that.
restartApplyPlugins cannot be used.
A highly performant queue implementation in JavaScript.
// empty queue
const queue = Queue.fromArray([]);
// with elements
const list = [10, 3, 8, 40, 1];
const queue = Queue.fromArray(list);
// If the list should not be mutated, simply construct the queue from a copy of it.
const queue = Queue.fromArray(list.slice(0));
Adds an element at the back of the queue.
| params | |
|---|---|
| name | type |
| element | object |
| runtime |
|---|
| O(1) |
peeks at the front element of the queue.
| return |
|---|
| object |
| runtime |
|---|
| O(1) |
peeks at the back element of the queue.
| return |
|---|
| object |
| runtime |
|---|
| O(1) |
dequeues the front element of the queue. It does not use .shift() to dequeue an element; instead, it uses a pointer to get the front element and only removes elements once the pointer passes half the size of the queue.
| return |
|---|
| object |
| runtime |
|---|
| O(n*log(n)) |
Dequeuing all elements takes O(n*log(n)) instead of O(n²) when using .shift().
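A minimal sketch of the pointer technique in plain JavaScript (illustrative; not this package's actual implementation):

```javascript
// Advance an offset instead of shifting, and only slice the backing
// array once the offset grows past half of it.
class PointerQueue {
  constructor(elements = []) {
    this._elements = elements;
    this._offset = 0;
  }
  enqueue(element) {
    this._elements.push(element);
    return this;
  }
  dequeue() {
    if (this._offset >= this._elements.length) return null;
    const first = this._elements[this._offset];
    this._offset += 1;
    // Reclaim memory once half of the array is dead space.
    if (this._offset * 2 >= this._elements.length) {
      this._elements = this._elements.slice(this._offset);
      this._offset = 0;
    }
    return first;
  }
  size() {
    return this._elements.length - this._offset;
  }
}
```

The occasional slice is what keeps the amortized cost low compared to shifting every element forward on each dequeue.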
Here’s a benchmark:
| dequeuing 1 million elements in Node v12 | |
| .dequeue() | .shift() |
| ~ 40 ms | ~ 3 minutes |
checks if the queue is empty.
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
returns the number of elements in the queue.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
creates a shallow copy of the queue.
| return |
|---|
| Queue |
| runtime |
|---|
| O(n) |
const queue = Queue.fromArray([{ id: 2 }, { id: 4 } , { id: 8 }]);
const clone = queue.clone();
clone.dequeue();
console.log(queue.front()); // { id: 2 }
console.log(clone.front()); // { id: 4 }
returns a copy of the remaining elements as an array.
| return |
|---|
| array |
| runtime |
|---|
| O(n) |
clears all elements from the queue.
| runtime |
|---|
| O(1) |
lint + tests
Esrecurse (esrecurse) is ECMAScript recursive traversing functionality.
The following code will output all variables declared at the root of a file.
esrecurse.visit(ast, {
XXXStatement: function (node) {
this.visit(node.left);
// do something...
this.visit(node.right);
}
});
We can use a Visitor instance.
var visitor = new esrecurse.Visitor({
XXXStatement: function (node) {
this.visit(node.left);
// do something...
this.visit(node.right);
}
});
visitor.visit(ast);
We can inherit from Visitor easily.
function DerivedVisitor() {
esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
this.visit(node.left);
// do something...
this.visit(node.right);
};
And you can invoke the default visiting operation inside a custom visit operation.
function DerivedVisitor() {
esrecurse.Visitor.call(/* this for constructor */ this /* visitor object automatically becomes this. */);
}
util.inherits(DerivedVisitor, esrecurse.Visitor);
DerivedVisitor.prototype.XXXStatement = function (node) {
// do something...
this.visitChildren(node);
};
The childVisitorKeys option customizes the behaviour of this.visitChildren(node). We can use user-defined node types.
// This tree contains a user-defined `TestExpression` node.
var tree = {
type: 'TestExpression',
// This 'argument' is the property containing the other **node**.
argument: {
type: 'Literal',
value: 20
},
// This 'extended' is the property not containing the other **node**.
extended: true
};
esrecurse.visit(
ast,
{
Literal: function (node) {
// do something...
}
},
{
// Extending the existing traversing rules.
childVisitorKeys: {
// TargetNodeName: [ 'keys', 'containing', 'the', 'other', '**node**' ]
TestExpression: ['argument']
}
}
);
We can use the fallback option as well. If the fallback option is "iteration", esrecurse visits all enumerable properties of unknown nodes. Please note that circular references cause a stack overflow; an AST might have circular references in additional properties added for some purpose (e.g. node.parent).
esrecurse.visit(
ast,
{
Literal: function (node) {
// do something...
}
},
{
fallback: 'iteration'
}
If the fallback option is a function, esrecurse calls this function to determine the enumerable properties of unknown nodes. Please note that circular references cause a stack overflow; an AST might have circular references in additional properties added for some purpose (e.g. node.parent).
esrecurse.visit(
ast,
{
Literal: function (node) {
// do something...
}
},
{
fallback: function (node) {
return Object.keys(node).filter(function(key) {
return key !== 'argument'
});
}
}
);
A result paging utility used by Google Node.js modules.
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.
| Sample | Source Code | Try it |
|---|---|---|
| Quickstart | source code | |
The Google Cloud Common Paginator Node.js Client API Reference documentation also contains samples.
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).
Legacy Node.js versions are supported as a best effort:
legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.
This library follows Semantic Versioning.
This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.
Apache Version 2.0
See LICENSE
Infer the content-type of a request.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
var http = require('http')
var typeis = require('type-is')
http.createServer(function (req, res) {
var istext = typeis(req, ['text/*'])
res.end('you ' + (istext ? 'sent' : 'did not send') + ' me text')
})
Checks if the request is one of the types. If the request has no body, even if there is a Content-Type header, then null is returned. If the Content-Type header is invalid or does not match any of the types, then false is returned. Otherwise, a string of the type that matched is returned.
The request argument is expected to be a Node.js HTTP request. The types argument is an array of type strings.
Each type in the types array can be one of the following:
- A file extension name such as json. This name will be returned if matched.
- A mime type such as application/json.
- A mime type with a wildcard such as */* or */json or application/*. The full mime type will be returned if matched.
- A suffix such as +json. This can be combined with a wildcard such as */vnd+json or application/*+json. The full mime type will be returned if matched.

Some examples to illustrate the inputs and returned value:
// req.headers.content-type = 'application/json'
typeis(req, ['json']) // => 'json'
typeis(req, ['html', 'json']) // => 'json'
typeis(req, ['application/*']) // => 'application/json'
typeis(req, ['application/json']) // => 'application/json'
typeis(req, ['html']) // => false
Returns a Boolean if the given request has a body, regardless of the Content-Type header.
Having a body has no relation to how large the body is (it may be 0 bytes). This is similar to how file existence works. If a body does exist, then this indicates that there is data to read from the Node.js request stream.
if (typeis.hasBody(req)) {
// read the body, since there is one
req.on('data', function (chunk) {
// ...
})
}
Checks if the mediaType is one of the types. If the mediaType is invalid or does not match any of the types, then false is returned. Otherwise, a string of the type that matched is returned.
The mediaType argument is expected to be a media type string. The types argument is an array of type strings.
Each type in the types array can be one of the following:
- A file extension name such as json. This name will be returned if matched.
- A mime type such as application/json.
- A mime type with a wildcard such as */* or */json or application/*. The full mime type will be returned if matched.
- A suffix such as +json. This can be combined with a wildcard such as */vnd+json or application/*+json. The full mime type will be returned if matched.

Some examples to illustrate the inputs and returned value:
var mediaType = 'application/json'
typeis.is(mediaType, ['json']) // => 'json'
typeis.is(mediaType, ['html', 'json']) // => 'json'
typeis.is(mediaType, ['application/*']) // => 'application/json'
typeis.is(mediaType, ['application/json']) // => 'application/json'
typeis.is(mediaType, ['html']) // => false
var express = require('express')
var typeis = require('type-is')
var app = express()
app.use(function bodyParser (req, res, next) {
if (!typeis.hasBody(req)) {
return next()
}
switch (typeis(req, ['urlencoded', 'json', 'multipart'])) {
case 'urlencoded':
// parse urlencoded body
throw new Error('implement urlencoded body parsing')
case 'json':
// parse json body
throw new Error('implement json body parsing')
case 'multipart':
// parse multipart body
throw new Error('implement multipart body parsing')
default:
// 415 error code
res.statusCode = 415
res.end()
break
}
})
Super simple cache for file metadata, useful for processes that work on a given series of files and only need to repeat the job on the ones changed since the previous run.
The module exposes two functions create and createFromFile.
create(cacheName, [directory, useCheckSum])
createFromFile(pathToCache, [useCheckSum])
// loads the cache; if one does not exist for the given
// Id a new one will be prepared to be created
var fileEntryCache = require('file-entry-cache');
var cache = fileEntryCache.create('testCache');
var files = expand('../fixtures/*.txt');
// the first time this method is called, will return all the files
var oFiles = cache.getUpdatedFiles(files);
// this will persist this to disk checking each file stats and
// updating the meta attributes `size` and `mtime`.
// custom fields could also be added to the meta object and will be persisted
// in order to retrieve them later
cache.reconcile();
// use this if you want the non visited file entries to be kept in the cache
// for more than one execution
//
// cache.reconcile( true /* noPrune */)
// on a second run
var cache2 = fileEntryCache.create('testCache');
// will return now only the files that were modified or none
// if no files were modified previous to the execution of this function
var oFiles = cache.getUpdatedFiles(files);
// if you want to prevent a file from being considered non modified
// something useful if a file failed some sort of validation
// you can then remove the entry from the cache doing
cache.removeEntry('path/to/file'); // path to file should be the same path of the file received on `getUpdatedFiles`
// that will effectively make the file to appear again as modified until the validation is passed. In that
// case you should not remove it from the cache
// if you need all the files, so you can determine what to do with the changed ones
// you can call
var oFiles = cache.normalizeEntries(files);
// oFiles will be an array of objects like the following
entry = {
key: 'some/name/file', // the path to the file
changed: true, // if the file was changed since previous run
meta: {
size: 3242, // the size of the file
mtime: 231231231, // the modification time of the file
data: {} // some extra field stored for this file (useful to save the result of a transformation on the file)
}
}
I needed a super simple and dumb in-memory cache with optional disk persistence (write-back cache) in order to make a script that beautifies files with esformatter execute only on the files that were changed since the last run.
In doing so the process of beautifying files was reduced from several seconds to a small fraction of a second.
This module uses flat-cache, a super simple key/value cache storage with optional file persistence.
The main idea is to read the files when the task begins, apply the required transforms, and, if the process succeeds, store the new state of the files. The next time, getChangedFiles will return only the files that were modified, making the process finish faster.
This module could also be used by processes that modify the files by applying a transform; in that case the result of the transform could be stored in the meta field of the entries. Anything added to the meta field will be persisted. Those processes won't need to call getChangedFiles; they will instead call normalizeEntries, which returns the entries with a changed field that can be used to determine whether the file was changed. If it was not changed, the stored transformed data could be used instead of actually applying the transformation, saving time when only a few files have changed.
In the worst case scenario all the files will be processed. In the best case scenario only a few of them will be processed.
Use stringify-able values if possible; flat-cache uses circular-json to try to persist circular structures, but this should be considered experimental. The best results are always obtained with non-circular values.
Convert Google .p12 keys to .pem keys.
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
const {getPem} = require('google-p12-pem');
/**
* Given a p12 file, convert it to the PEM format.
* @param {string} pathToCert The relative path to a p12 file.
*/
async function quickstart() {
// TODO(developer): provide the path to your cert
// const pathToCert = 'path/to/cert.p12';
const pem = await getPem(pathToCert);
console.log('The converted PEM:');
console.log(pem);
}
quickstart();
Samples are in the samples/ directory. The samples' README.md has instructions for running the samples.
| Sample | Source Code | Try it |
|---|---|---|
| Quickstart | source code | |
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).
Legacy Node.js versions are supported as a best effort:
legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.
This library follows Semantic Versioning.
This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.
Apache Version 2.0
See LICENSE
Returns true if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.
Install with npm:
You might also be interested in is-valid-glob and has-glob.
True
Patterns that have glob characters or regex patterns will return true:
isGlob('!foo.js');
isGlob('*.js');
isGlob('**/abc.js');
isGlob('abc/*.js');
isGlob('abc/(aaa|bbb).js');
isGlob('abc/[a-z].js');
isGlob('abc/{a,b}.js');
isGlob('abc/?.js');
//=> true
Extglobs
isGlob('abc/@(a).js');
isGlob('abc/!(a).js');
isGlob('abc/+(a).js');
isGlob('abc/*(a).js');
isGlob('abc/?(a).js');
//=> true
False
Escaped globs or extglobs return false:
isGlob('abc/\\@(a).js');
isGlob('abc/\\!(a).js');
isGlob('abc/\\+(a).js');
isGlob('abc/\\*(a).js');
isGlob('abc/\\?(a).js');
isGlob('\\!foo.js');
isGlob('\\*.js');
isGlob('\\*\\*/abc.js');
isGlob('abc/\\*.js');
isGlob('abc/\\(aaa|bbb).js');
isGlob('abc/\\[a-z].js');
isGlob('abc/\\{a,b}.js');
isGlob('abc/\\?.js');
//=> false
Strings that do not contain glob patterns return false:
isGlob('abc.js');
isGlob('abc/def/ghi.js');
isGlob('foo.js');
isGlob('abc/@.js');
isGlob('abc/+.js');
isGlob();
isGlob(null);
//=> false
Arrays are also false (if you want to check whether an array has a glob pattern, use has-glob):
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 40 | jonschlinkert |
| 1 | tuvistavie |
(This document was generated by verb-generate-readme (a verb generator), please don’t edit the readme directly. Any changes to the readme must be made in .verb.md.)
To generate the readme and API documentation with verb:
Install dev dependencies:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.1.31, on October 12, 2016.
# content-disposition
Create and parse HTTP Content-Disposition header
Create an attachment Content-Disposition header value using the given file name, if supplied. The filename is optional and if no file name is desired, but you want to specify options, set filename to undefined.
note HTTP headers are of the ISO-8859-1 character set. If you are writing this header through a means different from setHeader in Node.js, you’ll want to specify the 'binary' encoding in Node.js.
contentDisposition accepts these properties in the options object.
If the filename option is outside ISO-8859-1, then the file name is actually stored in a supplemental field for clients that support Unicode file names and a ISO-8859-1 version of the file name is automatically generated.
This specifies the ISO-8859-1 file name to override the automatic generation, or disables the generation altogether; defaults to true.
- false will disable including an ISO-8859-1 file name and only include the Unicode version (unless the file name is already ISO-8859-1).
- true will enable automatic generation if the file name is outside ISO-8859-1.

If the filename option is ISO-8859-1 and this option is specified and has a different value, then the filename option is encoded in the extended field and this is set as the fallback field, even though they are both ISO-8859-1.
Specifies the disposition type, defaults to "attachment". This can also be "inline", or any other value (all values except inline are treated like attachment, but can convey additional information if both parties agree to it). The type is normalized to lower-case.
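As a rough sketch of the header value produced for a simple ASCII file name (the real module additionally handles quoting edge cases and Unicode fallbacks; this helper is illustrative only, not the module's code):

```javascript
// Hypothetical simplified version: ASCII-only file names, no escaping.
function simpleContentDisposition(filename) {
  return filename === undefined
    ? 'attachment'
    : 'attachment; filename="' + filename + '"';
}

console.log(simpleContentDisposition('plans.pdf'));
// attachment; filename="plans.pdf"
console.log(simpleContentDisposition());
// attachment
```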
Parse a Content-Disposition header string. This automatically handles extended (“Unicode”) parameters by decoding them and providing them under the standard parameter name. This will return an object with the following properties (examples are shown for the string 'attachment; filename="EURO rates.txt"; filename*=UTF-8\'\'%e2%82%ac%20rates.txt'):
type: The disposition type (always lower case). Example: 'attachment'
parameters: An object of the parameters in the disposition (name of parameter always lower case and extended versions replace non-extended versions). Example: {filename: "€ rates.txt"}
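A small sketch of just the extended-parameter decoding involved in the example above (per RFC 5987; illustrative, not the module's actual code):

```javascript
// The filename* value has the form charset'language'percent-encoded-bytes.
const extValue = "UTF-8''%e2%82%ac%20rates.txt";
const match = /^([^']*)'[^']*'(.*)$/.exec(extValue);
const charset = match[1];                      // "UTF-8"
const filename = decodeURIComponent(match[2]); // percent-decode as UTF-8
console.log(charset);  // UTF-8
console.log(filename); // € rates.txt
```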
var contentDisposition = require('content-disposition')
var destroy = require('destroy')
var fs = require('fs')
var http = require('http')
var onFinished = require('on-finished')
var filePath = '/path/to/public/plans.pdf'
http.createServer(function onRequest (req, res) {
// set headers
res.setHeader('Content-Type', 'application/pdf')
res.setHeader('Content-Disposition', contentDisposition(filePath))
// send file
var stream = fs.createReadStream(filePath)
stream.pipe(res)
onFinished(res, function () {
destroy(stream)
})
})
Delete nested properties from an object using dot notation.
Install with npm:
var unset = require('unset-value');
var obj = {a: {b: {c: 'd', e: 'f'}}};
unset(obj, 'a.b.c');
console.log(obj);
//=> {a: {b: {e: 'f'}}};
Returns true when a property does not exist. This is consistent with delete behavior in that it does not throw when a property does not exist.
var one = {a: {b: {c: 'd'}}};
unset(one, 'a.b');
console.log(one);
//=> {a: {}}
var two = {a: {b: {c: 'd'}}};
unset(two, 'a.b.c');
console.log(two);
//=> {a: {b: {}}}
var three = {a: {b: {c: 'd', e: 'f'}}};
unset(three, 'a.b.c');
console.log(three);
//=> {a: {b: {e: 'f'}}}
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
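The examples above can be sketched in a few lines (hypothetical unsetSketch, not the module's implementation; the real module is more careful about edge cases such as unsafe keys):

```javascript
// Walk down the object following dot-notation keys, then delete the last key.
// Missing intermediate objects are treated as success, mirroring `delete`.
function unsetSketch(obj, path) {
  var keys = path.split('.')
  var last = keys.pop()
  for (var i = 0; i < keys.length; i++) {
    if (obj == null || typeof obj !== 'object') return true
    obj = obj[keys[i]]
  }
  if (obj == null || typeof obj !== 'object') return true
  delete obj[last]
  return true
}
```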
| Commits | Contributor |
|---|---|
| 6 | jonschlinkert |
| 2 | wtgtybhertgeghgtwtg |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.4.2, on February 25, 2017.
# is-accessor-descriptor
Returns true if a value has the characteristics of a valid JavaScript accessor descriptor.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
You may also pass an object and property name to check if the property is an accessor:
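As a rough sketch of what this check involves (hypothetical isAccessorSketch; the real module is stricter about which descriptor keys are allowed):

```javascript
// Accepts either a descriptor-like object, or an object plus a property name
// (in which case the property's own descriptor is inspected).
function isAccessorSketch(obj, prop) {
  var desc = prop !== undefined
    ? Object.getOwnPropertyDescriptor(obj, prop)
    : obj
  if (!desc || typeof desc !== 'object') return false
  // An accessor descriptor must have get and/or set, and no data keys.
  if (!('get' in desc) && !('set' in desc)) return false
  if ('value' in desc || 'writable' in desc) return false
  var ok = function (fn) { return fn === undefined || typeof fn === 'function' }
  return ok(desc.get) && ok(desc.set)
}
```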
false when not an object
true when the object has valid properties and the properties all have the correct JavaScript types:
false when the object has invalid properties
isAccessor({get: noop, set: noop, bar: 'baz'})
isAccessor({get: noop, writable: true})
isAccessor({get: noop, value: true})
//=> false
false when an accessor is not a function
isAccessor({get: noop, set: 'baz'})
isAccessor({get: 'foo', set: noop})
isAccessor({get: 'foo', bar: 'baz'})
isAccessor({get: 'foo', set: 'baz'})
//=> false
false when a value is not the correct type
isAccessor({get: noop, set: noop, enumerable: 'foo'})
isAccessor({set: noop, configurable: 'foo'})
isAccessor({get: noop, configurable: 'foo'})
//=> false
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
| Commits | Contributor |
|---|---|
| 22 | jonschlinkert |
| 2 | realityking |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on November 01, 2017.
# is-data-descriptor
Returns true if a value has the characteristics of a valid JavaScript data descriptor.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
true when the descriptor has valid properties with valid values.
// `value` can be anything
isDataDesc({value: 'foo'})
isDataDesc({value: function() {}})
isDataDesc({value: true})
//=> true
false when not an object
false when the object has invalid properties
isDataDesc({value: 'foo', bar: 'baz'})
//=> false
isDataDesc({value: 'foo', get: function(){}})
//=> false
isDataDesc({get: function(){}, value: 'foo'})
//=> false
false when a value is not the correct type
isDataDesc({value: 'foo', enumerable: 'foo'})
//=> false
isDataDesc({value: 'foo', configurable: 'foo'})
//=> false
isDataDesc({value: 'foo', writable: 'foo'})
//=> false
The only valid data descriptor properties are the following:
configurable (required)
enumerable (required)
value (optional)
writable (optional)
To be a valid data descriptor, either value or writable must be defined.
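The rules above can be sketched as follows (hypothetical isDataDescSketch; as the earlier examples show, the real module additionally rejects unknown keys like bar):

```javascript
// A data descriptor must have value and/or writable, no accessor keys,
// and boolean-typed writable/enumerable/configurable when present.
function isDataDescSketch(desc) {
  if (!desc || typeof desc !== 'object') return false
  if ('get' in desc || 'set' in desc) return false
  if (!('value' in desc) && !('writable' in desc)) return false
  if ('writable' in desc && typeof desc.writable !== 'boolean') return false
  if ('enumerable' in desc && typeof desc.enumerable !== 'boolean') return false
  if ('configurable' in desc && typeof desc.configurable !== 'boolean') return false
  return true
}
```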
Invalid properties
A descriptor may have additional invalid properties (an error will not be thrown).
var foo = {};
Object.defineProperty(foo, 'bar', {
enumerable: true,
whatever: 'blah', // invalid, but doesn't cause an error
get: function() {
return 'baz';
}
});
console.log(foo.bar);
//=> 'baz'
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 21 | jonschlinkert |
| 2 | realityking |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on November 01, 2017.
TypeScript is a language for application-scale JavaScript. TypeScript adds optional types to JavaScript that support tools for large-scale JavaScript applications for any browser, for any host, on any OS. TypeScript compiles to readable, standards-based JavaScript. Try it out at the playground, and stay up to date via our blog and Twitter account.
Find others who are using TypeScript at our community page.
For the latest stable version:
For our nightly builds:
There are many ways to contribute to TypeScript.
* Submit bugs and help us verify fixes as they are checked in.
* Review the source code changes.
* Engage with other TypeScript users and developers on StackOverflow.
* Help each other in the TypeScript Community Discord.
* Join the #typescript discussion on Twitter.
* Contribute bug fixes.
* Read the archived language specification (docx, pdf, md).
This project has adopted the Microsoft Open Source Code of Conduct. For more information see the Code of Conduct FAQ or contact opencode@microsoft.com with any additional questions or comments.
In order to build the TypeScript compiler, ensure that you have Git and Node.js installed.
Clone a copy of the repo:
Change to the TypeScript directory:
Install Gulp tools and dev dependencies:
Use one of the following to build and test:
gulp local # Build the compiler into built/local.
gulp clean # Delete the built compiler.
gulp LKG # Replace the last known good with the built one.
# Bootstrapping step to be executed when the built compiler reaches a stable state.
gulp tests # Build the test infrastructure using the built compiler.
gulp runtests # Run tests using the built compiler and test infrastructure.
# You can override the specific suite runner used or specify a test for this command.
# Use --tests=<testPath> for a specific test and/or --runner=<runnerName> for a specific suite.
# Valid runners include conformance, compiler, fourslash, project, user, and docker
# The user and docker runners are extended test suite runners - the user runner
# works on disk in the tests/cases/user directory, while the docker runner works in containers.
# You'll need to have the docker executable in your system path for the docker runner to work.
gulp runtests-parallel # Like runtests, but split across multiple threads. Uses a number of threads equal to the system
# core count by default. Use --workers=<number> to adjust this.
gulp baseline-accept # This replaces the baseline test results with the results obtained from gulp runtests.
gulp lint # Runs eslint on the TypeScript source.
gulp help # List the above commands.
For details on our planned features and future direction please refer to our roadmap.
Node.js Google Authentication Service Account Tokens
This is a low level utility library used to interact with Google Authentication services. In most cases, you probably want to use the google-auth-library instead.
.pem or .p12 key file:
const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
keyFile: 'path/to/key.pem', // or path to .p12 key file
email: 'my_service_account_email@developer.gserviceaccount.com',
scope: ['https://scope1', 'https://scope2'] // or space-delimited string of scopes
});
gtoken.getToken((err, tokens) => {
if (err) {
console.log(err);
return;
}
console.log(tokens);
// {
// access_token: 'very-secret-token',
// expires_in: 3600,
// token_type: 'Bearer'
// }
});
You can also use the async/await style API:
Or use promises:
.json key file:
const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
keyFile: 'path/to/key.json',
scope: ['https://scope1', 'https://scope2'] // or space-delimited string of scopes
});
gtoken.getToken((err, tokens) => {
if (err) {
console.log(err);
return;
}
console.log(tokens);
});
const key = '-----BEGIN RSA PRIVATE KEY-----\nXXXXXXXXXXX...';
const { GoogleToken } = require('gtoken');
const gtoken = new GoogleToken({
email: 'my_service_account_email@developer.gserviceaccount.com',
scope: ['https://scope1', 'https://scope2'], // or space-delimited string of scopes
key: key
});
Various options that can be set when initializing the gtoken object.
options.email or options.iss: The service account email address.
options.scope: An array of scope strings or space-delimited string of scopes.
options.sub: The email address of the user requesting delegated access.
options.keyFile: The filename of the .json key, .pem key or .p12 key.
options.key: The raw RSA private key value, in place of using options.keyFile.
Returns the cached tokens or requests a new one and returns it.
gtoken.getToken((err, token) => {
console.log(err || token);
// gtoken.rawToken value is also set
});
Given a keyfile, returns the key and (if available) the client email.
Various properties set on the gtoken object after a call to .getToken().
gtoken.idToken: The OIDC token returned (if any).
gtoken.accessToken: The access token.
gtoken.expiresAt: The expiry date as milliseconds since 1970/01/01.
gtoken.key: The raw key value.
gtoken.rawToken: Most recent raw token data received from Google.
Returns true if the token has expired, or the token does not exist.
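The expiry check just described can be sketched with a hypothetical helper (hasExpired is illustrative only, operating on an expiresAt-style millisecond timestamp):

```javascript
// A token counts as expired when no expiry is recorded,
// or when the recorded expiry time is in the past.
function hasExpired(expiresAt) {
  return !expiresAt || Date.now() >= expiresAt
}
```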
Revoke the token if set.
Getting a .p12 key from Google: create a .p12 key and download it into your project.
Converting a .p12 key to a .pem key: you can just specify your .p12 file (with .p12 extension) as the keyFile and it will automatically be converted to a .pem on the fly, however this results in a slight performance hit. If you’d like to convert to a .pem for use later, use OpenSSL if you have it installed.
Don’t forget, the passphrase when converting these files is the string 'notasecret'
A JSON Web Algorithms implementation focusing (exclusively, at this point) on the algorithms necessary for JSON Web Signatures.
This library supports all of the required, recommended and optional cryptographic algorithms for JWS:
| alg Parameter Value | Digital Signature or MAC Algorithm |
|---|---|
| HS256 | HMAC using SHA-256 hash algorithm |
| HS384 | HMAC using SHA-384 hash algorithm |
| HS512 | HMAC using SHA-512 hash algorithm |
| RS256 | RSASSA using SHA-256 hash algorithm |
| RS384 | RSASSA using SHA-384 hash algorithm |
| RS512 | RSASSA using SHA-512 hash algorithm |
| PS256 | RSASSA-PSS using SHA-256 hash algorithm |
| PS384 | RSASSA-PSS using SHA-384 hash algorithm |
| PS512 | RSASSA-PSS using SHA-512 hash algorithm |
| ES256 | ECDSA using P-256 curve and SHA-256 hash algorithm |
| ES384 | ECDSA using P-384 curve and SHA-384 hash algorithm |
| ES512 | ECDSA using P-521 curve and SHA-512 hash algorithm |
| none | No digital signature or MAC value included |
Please note that PS* only works on Node 6.12+ (excluding 7.x).
In order to run the tests, a recent version of OpenSSL is required. The version that comes with OS X (OpenSSL 0.9.8r 8 Feb 2011) is not recent enough, as it does not fully support ECDSA keys. You’ll need to use a version > 1.0.0; I tested with OpenSSL 1.0.1c 10 May 2012.
To run the tests, do
This will generate a bunch of keypairs to use in testing. If you want to generate new keypairs, do make clean before running npm test again.
I spawn openssl dgst -sign to test OpenSSL sign → JS verify and openssl dgst -verify to test JS sign → OpenSSL verify for each of the RSA and ECDSA algorithms.
Creates a new jwa object with sign and verify methods for the algorithm. Valid values for algorithm can be found in the table above ('HS256', 'HS384', etc) and are case-sensitive. Passing an invalid algorithm value will throw a TypeError.
Sign some input with either a secret for HMAC algorithms, or a private key for RSA and ECDSA algorithms.
If input is not already a string or buffer, JSON.stringify will be called on it to attempt to coerce it.
For the HMAC algorithm, secretOrPrivateKey should be a string or a buffer. For ECDSA and RSA, the value should be a string representing a PEM encoded private key.
Output is base64url formatted. This is for convenience as JWS expects the signature in this format. If your application needs the output in a different format, please open an issue. In the meantime, you can use brianloveswords/base64url to decode the signature.
As of Node.js v0.11.8, SPKAC support was introduced. If your Node.js version is recent enough, you can pass an object { key: '..', passphrase: '...' }.
Verify a signature. Returns true or false.
signature should be a base64url encoded string.
For the HMAC algorithm, secretOrPublicKey should be a string or a buffer. For ECDSA and RSA, the value should be a string representing a PEM encoded public key.
HMAC
const jwa = require('jwa');
const hmac = jwa('HS256');
const input = 'super important stuff';
const secret = 'shhhhhh';
const signature = hmac.sign(input, secret);
hmac.verify(input, signature, secret) // === true
hmac.verify(input, signature, 'trickery!') // === false
With keys
const fs = require('fs');
const jwa = require('jwa');
const privateKey = fs.readFileSync(__dirname + '/ecdsa-p521-private.pem');
const publicKey = fs.readFileSync(__dirname + '/ecdsa-p521-public.pem');
const ecdsa = jwa('ES512');
const input = 'very important stuff';
const signature = ecdsa.sign(input, privateKey);
ecdsa.verify(input, signature, publicKey) // === true
Normalize slashes in a file path to be posix/unix-like forward slashes. Also condenses repeat slashes to a single slash and removes trailing slashes, unless disabled.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
const normalize = require('normalize-path');
console.log(normalize('\\foo\\bar\\baz\\'));
//=> '/foo/bar/baz'
win32 namespaces
console.log(normalize('\\\\?\\UNC\\Server01\\user\\docs\\Letter.txt'));
//=> '//?/UNC/Server01/user/docs/Letter.txt'
console.log(normalize('\\\\.\\CdRomX'));
//=> '//./CdRomX'
Consecutive slashes
Condenses multiple consecutive forward slashes (except for leading slashes in win32 namespaces) to a single slash.
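A rough sketch of the normalization described above (hypothetical normalizeSketch; the real module also preserves the leading double slash of win32 namespaces):

```javascript
// Convert backslashes to forward slashes, condense repeats,
// and optionally strip a trailing slash.
function normalizeSketch(path, stripTrailing) {
  var p = String(path).replace(/\\+/g, '/').replace(/\/\/+/g, '/')
  if (stripTrailing !== false && p.length > 1) p = p.replace(/\/+$/, '')
  return p
}
```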
By default trailing slashes are removed. Pass false as the last argument to disable this behavior and keep trailing slashes:
console.log(normalize('foo\\bar\\baz\\', false)); //=> 'foo/bar/baz/'
console.log(normalize('./foo/bar/baz/', false)); //=> './foo/bar/baz/'
No breaking changes in this release.
path.parse() can be used after a path has been normalized by this library.
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
| Commits | Contributor |
|---|---|
| 35 | jonschlinkert |
| 1 | phated |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on April 19, 2018.
# word-wrap
Wrap words to a specified length.
Install with npm:
var wrap = require('word-wrap');
wrap('Lorem ipsum dolor sit amet, consectetur adipiscing elit, sed do eiusmod tempor incididunt ut labore et dolore magna aliqua. Ut enim ad minim veniam, quis nostrud exercitation ullamco laboris nisi ut aliquip ex ea commodo consequat.');
Results in:
Lorem ipsum dolor sit amet, consectetur adipiscing
elit, sed do eiusmod tempor incididunt ut labore
et dolore magna aliqua. Ut enim ad minim veniam,
quis nostrud exercitation ullamco laboris nisi ut
aliquip ex ea commodo consequat.
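The greedy wrapping behavior shown above can be sketched as follows (toy wrapSketch, not the module's implementation; the real module also supports indent, newline, escape, trim, and cut options):

```javascript
// Greedily pack whitespace-separated words into lines no wider than `width`.
function wrapSketch(str, width) {
  var words = str.split(/\s+/)
  var lines = []
  var line = ''
  for (var i = 0; i < words.length; i++) {
    if (line && (line + ' ' + words[i]).length > width) {
      lines.push(line)
      line = words[i]
    } else {
      line = line ? line + ' ' + words[i] : words[i]
    }
  }
  if (line) lines.push(line)
  return lines.join('\n')
}
```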
Type: Number
Default: 50
The width of the text before wrapping to a new line.
Example:
Type: String
Default: two spaces
The string to use at the beginning of each line.
Example:
Type: String
Default: \n
The string to use at the end of each line.
Example:
Type: function
Default: function(str){return str;}
An escape function to run on each line after splitting them.
Example:
var xmlescape = require('xml-escape');
wrap(str, {
escape: function(string){
return xmlescape(string);
}
});
Type: Boolean
Default: false
Trim trailing whitespace from the returned string. This option is included since .trim() would also strip the leading indentation from the first line.
Example:
Type: Boolean
Default: false
Break a word between any two letters when the word is longer than the specified width.
Example:
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 43 | jonschlinkert |
| 2 | lordvlad |
| 2 | hildjj |
| 1 | danilosampaio |
| 1 | 2fd |
| 1 | toddself |
| 1 | wolfgang42 |
| 1 | zachhale |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on June 02, 2017.
# regexpp
A regular expression parser for ECMAScript.
import {
AST,
RegExpParser,
RegExpValidator,
RegExpVisitor,
parseRegExpLiteral,
validateRegExpLiteral,
visitRegExpAST
} from "regexpp"
Parse a given regular expression literal, then return the AST object.
This is equivalent to new RegExpParser(options).parseLiteral(source).
source (string | RegExp) The source code to parse.
options? (RegExpParser.Options) The options to parse.
Validate a given regular expression literal.
This is equivalent to new RegExpValidator(options).validateLiteral(source).
source (string) The source code to validate.
options? (RegExpValidator.Options) The options to validate.
Visit each node of a given AST.
This is equivalent to new RegExpVisitor(handlers).visit(ast).
ast (AST.Node) The AST to visit.
handlers (RegExpVisitor.Handlers) The callbacks.
options? (RegExpParser.Options) The options to parse.
Parse a regular expression literal.
source (string) The source code to parse. E.g. "/abc/g".
start? (number) The start index in the source code. Default is 0.
end? (number) The end index in the source code. Default is source.length.
Parse a regular expression pattern.
source (string) The source code to parse. E.g. "abc".
start? (number) The start index in the source code. Default is 0.
end? (number) The end index in the source code. Default is source.length.
uFlag? (boolean) The flag to enable Unicode mode.
Parse regular expression flags.
source (string) The source code to parse. E.g. "gim".
start? (number) The start index in the source code. Default is 0.
end? (number) The end index in the source code. Default is source.length.
options (RegExpValidator.Options) The options to validate.
Validate a regular expression literal.
source (string) The source code to validate.
start? (number) The start index in the source code. Default is 0.
end? (number) The end index in the source code. Default is source.length.
Validate a regular expression pattern.
source (string) The source code to validate.start? (number) The start index in the source code. Default is 0.end? (number) The end index in the source code. Default is source.length.uFlag? (boolean) The flag to enable Unicode mode.Validate a regular expression flags.
source (string) The source code to validate.start? (number) The start index in the source code. Default is 0.end? (number) The end index in the source code. Default is source.length.handlers (RegExpVisitor.Handlers) The callbacks.Validate a regular expression literal.
ast (AST.Node) The AST to visit.Welcome contributing!
Please use GitHub’s Issues/PRs.
- npm test runs tests and measures coverage.
- npm run build compiles TypeScript source code to index.js, index.js.map, and index.d.ts.
- npm run clean removes the temporary files which are created by npm test and npm run build.
- npm run lint runs ESLint.
- npm run update:test updates test fixtures.
- npm run update:ids updates src/unicode/ids.ts.
- npm run watch runs tests with the --watch option.

Create HTTP errors for Express, Koa, Connect, etc. with ease.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
var createError = require('http-errors')
var express = require('express')
var app = express()
app.use(function (req, res, next) {
if (!req.user) return next(createError(401, 'Please login to view this page.'))
next()
})

- expose - can be used to signal if message should be sent to the client, defaulting to false when status >= 500
- headers - can be an object of header names to values to be sent to the client, defaulting to undefined. When defined, the key names should all be lower-cased
- message - the traditional error message, which should be kept short and all single line
- status - the status code of the error, mirroring statusCode for general compatibility
- statusCode - the status code of the error, defaulting to 500

Create a new error object with the given message msg. The error object inherits from createError.HttpError.
- status: 500 - the status code as a number
- message - the message of the error, defaulting to node's text for that status code.
- properties - custom properties to attach to the object

Extend the given error object with createError.HttpError properties. This will not alter the inheritance of the given error object, and the modified error object is the return value.
fs.readFile('foo.txt', function (err, buf) {
if (err) {
if (err.code === 'ENOENT') {
var httpError = createError(404, err, { expose: false })
} else {
var httpError = createError(500, err)
}
}
})

- status - the status code as a number
- error - the error object to extend
- properties - custom properties to attach to the object

Create a new error object with the given message msg. The error object inherits from createError.HttpError.
- code - the status code as a number
- name - the name of the error as a "bumpy case", i.e. NotFound or InternalServerError.

| Status Code | Constructor Name |
|---|---|
| 400 | BadRequest |
| 401 | Unauthorized |
| 402 | PaymentRequired |
| 403 | Forbidden |
| 404 | NotFound |
| 405 | MethodNotAllowed |
| 406 | NotAcceptable |
| 407 | ProxyAuthenticationRequired |
| 408 | RequestTimeout |
| 409 | Conflict |
| 410 | Gone |
| 411 | LengthRequired |
| 412 | PreconditionFailed |
| 413 | PayloadTooLarge |
| 414 | URITooLong |
| 415 | UnsupportedMediaType |
| 416 | RangeNotSatisfiable |
| 417 | ExpectationFailed |
| 418 | ImATeapot |
| 421 | MisdirectedRequest |
| 422 | UnprocessableEntity |
| 423 | Locked |
| 424 | FailedDependency |
| 425 | UnorderedCollection |
| 426 | UpgradeRequired |
| 428 | PreconditionRequired |
| 429 | TooManyRequests |
| 431 | RequestHeaderFieldsTooLarge |
| 451 | UnavailableForLegalReasons |
| 500 | InternalServerError |
| 501 | NotImplemented |
| 502 | BadGateway |
| 503 | ServiceUnavailable |
| 504 | GatewayTimeout |
| 505 | HTTPVersionNotSupported |
| 506 | VariantAlsoNegotiates |
| 507 | InsufficientStorage |
| 508 | LoopDetected |
| 509 | BandwidthLimitExceeded |
| 510 | NotExtended |
| 511 | NetworkAuthenticationRequired |
Fast, in memory work queue.
Benchmarks (1 million tasks):
Obtained on node 12.16.1, on a dedicated server.
If you need zero-overhead series function call, check out fastseries. For zero-overhead parallel function call, check out fastparallel.
npm i fastq --save
'use strict'
var queue = require('fastq')(worker, 1)
queue.push(42, function (err, result) {
if (err) { throw err }
console.log('the result is', result)
})
function worker (arg, cb) {
cb(null, 42 * 2)
}

'use strict'
var that = { hello: 'world' }
var queue = require('fastq')(that, worker, 1)
queue.push(42, function (err, result) {
if (err) { throw err }
console.log(this)
console.log('the result is', result)
})
function worker (arg, cb) {
console.log(this)
cb(null, 42 * 2)
}

- fastqueue()
- queue#push()
- queue#unshift()
- queue#pause()
- queue#resume()
- queue#idle()
- queue#length()
- queue#getQueue()
- queue#kill()
- queue#killAndDrain()
- queue#error()
- queue#concurrency
- queue#drain
- queue#empty
- queue#saturated

### queue.push(task, done)
Add a task at the end of the queue. done(err, result) will be called when the task was processed.
### queue.unshift(task, done)
Add a task at the beginning of the queue. done(err, result) will be called when the task was processed.
Pause the processing of tasks. Currently worked tasks are not stopped.
### queue.resume()
Resume the processing of tasks.
Returns false if there are tasks being processed or waiting to be processed. true otherwise.
### queue.length()
Returns the number of tasks waiting to be processed (in the queue).
Returns all the tasks waiting to be processed (in the queue). Returns an empty array when there are no tasks.
### queue.kill()
Removes all tasks waiting to be processed, and resets drain to an empty function.
### queue.killAndDrain()
Same as kill, but the drain function will be called before it is reset to an empty function.
### queue.error(handler)
Set a global error handler. handler(err, task) will be called when any of the tasks return an error.
Property that returns the number of concurrent tasks that could be executed in parallel. It can be altered at runtime.
### queue.drain
Function that will be called when the last item from the queue has been processed by a worker. It can be altered at runtime.
### queue.empty
Function that will be called when the last item from the queue has been assigned to a worker. It can be altered at runtime.
### queue.saturated
Function that will be called when the queue hits the concurrency limit. It can be altered at runtime.
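The behavior described above — push callbacks, a concurrency limit, idle/drain — can be illustrated with a minimal self-contained queue. This is a simplified sketch, not fastq's actual implementation:

```javascript
'use strict'
// Minimal concurrency-limited work queue in the spirit of fastq (sketch only).
function fastqueue (worker, concurrency) {
  const tasks = []
  let running = 0
  const q = {
    drain: function () {}, // called when the last task finishes
    push: function (arg, done) {
      tasks.push({ arg: arg, done: done })
      run()
    },
    length: function () { return tasks.length },
    idle: function () { return running === 0 && tasks.length === 0 }
  }
  function run () {
    // start tasks until the concurrency limit is hit or the queue is empty
    while (running < concurrency && tasks.length > 0) {
      const task = tasks.shift()
      running++
      worker(task.arg, function (err, result) {
        running--
        task.done(err, result)
        if (q.idle()) q.drain()
        else run() // a slot freed up; pick up the next waiting task
      })
    }
  }
  return q
}

const queue = fastqueue(function (arg, cb) { cb(null, arg * 2) }, 1)
queue.push(21, function (err, result) {
  if (err) throw err
  console.log('the result is', result) // the result is 42
})
```

The real fastq additionally pools task objects to avoid allocations, which is where its speed comes from.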
ISC
Yargs be a node.js library fer hearties tryin’ ter parse optstrings
Yargs helps you build interactive command line tools, by parsing arguments and generating an elegant user interface.
It gives you:
- commands and (grouped) options (my-program.js serve --port=5000).
- a dynamically generated help menu based on your arguments:

mocha [spec..]
Run tests with Mocha
Commands
mocha inspect [spec..] Run tests with Mocha [default]
mocha init <path> create a client-side Mocha setup at <path>
Rules & Behavior
--allow-uncaught Allow uncaught errors to propagate [boolean]
--async-only, -A Require all tests to use a callback (async) or
return a Promise [boolean]
Stable version:
Bleeding edge version with the most recent features:
#!/usr/bin/env node
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')
const argv = yargs(hideBin(process.argv)).argv
if (argv.ships > 3 && argv.distance < 53.5) {
console.log('Plunder more riffiwobbles!')
} else {
console.log('Retreat from the xupptumblers!')
}

$ ./plunder.js --ships=4 --distance=22
Plunder more riffiwobbles!
$ ./plunder.js --ships 12 --distance 98.7
Retreat from the xupptumblers!

#!/usr/bin/env node
const yargs = require('yargs/yargs')
const { hideBin } = require('yargs/helpers')
yargs(hideBin(process.argv))
.command('serve [port]', 'start the server', (yargs) => {
yargs
.positional('port', {
describe: 'port to bind on',
default: 5000
})
}, (argv) => {
if (argv.verbose) console.info(`start server on :${argv.port}`)
serve(argv.port)
})
.option('verbose', {
alias: 'v',
type: 'boolean',
description: 'Run with verbose logging'
})
.argv

Run the example above with --help to see the help for the application.
yargs has type definitions at @types/yargs.
npm i @types/yargs --save-dev
See usage examples in docs.
As of v16, yargs supports Deno:
import yargs from 'https://deno.land/x/yargs/deno.ts'
import { Arguments } from 'https://deno.land/x/yargs/deno-types.ts'
yargs(Deno.args)
.command('download <files...>', 'download a list of files', (yargs: any) => {
return yargs.positional('files', {
describe: 'a list of files to do something with'
})
}, (argv: Arguments) => {
console.info(argv)
})
.strictCommands()
.demandCommand(1)
.argv

As of v16, yargs supports ESM imports:
import yargs from 'yargs'
import { hideBin } from 'yargs/helpers'
yargs(hideBin(process.argv))
.command('curl <url>', 'fetch the contents of the URL', () => {}, (argv) => {
console.info(argv)
})
.demandCommand(1)
.argv

See examples of using yargs in the browser in docs.
Having problems? want to contribute? join our community slack.
Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.
A dictionary of file extensions and associated module loaders.
This is used by Liftoff to automatically require dependencies for configuration files, and by rechoir for registering module loaders.
Map file types to modules which provide a require.extensions loader.
{
'.babel.js': [
{
module: '@babel/register',
register: function(hook) {
// register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353
// which only captures the final extension (.babel.js -> .js)
hook({ extensions: '.js' });
},
},
{
module: 'babel-register',
register: function(hook) {
hook({ extensions: '.js' });
},
},
{
module: 'babel-core/register',
register: function(hook) {
hook({ extensions: '.js' });
},
},
{
module: 'babel/register',
register: function(hook) {
hook({ extensions: '.js' });
},
},
],
'.babel.ts': [
{
module: '@babel/register',
register: function(hook) {
hook({ extensions: '.ts' });
},
},
],
'.buble.js': 'buble/register',
'.cirru': 'cirru-script/lib/register',
'.cjsx': 'node-cjsx/register',
'.co': 'coco',
'.coffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
'.coffee.md': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
'.csv': 'require-csv',
'.eg': 'earlgrey/register',
'.esm.js': {
module: 'esm',
register: function(hook) {
// register on .js extension due to https://github.com/joyent/node/blob/v0.12.0/lib/module.js#L353
// which only captures the final extension (.babel.js -> .js)
var esmLoader = hook(module);
require.extensions['.js'] = esmLoader('module')._extensions['.js'];
},
},
'.iced': ['iced-coffee-script/register', 'iced-coffee-script'],
'.iced.md': 'iced-coffee-script/register',
'.ini': 'require-ini',
'.js': null,
'.json': null,
'.json5': 'json5/lib/require',
'.jsx': [
{
module: '@babel/register',
register: function(hook) {
hook({ extensions: '.jsx' });
},
},
{
module: 'babel-register',
register: function(hook) {
hook({ extensions: '.jsx' });
},
},
{
module: 'babel-core/register',
register: function(hook) {
hook({ extensions: '.jsx' });
},
},
{
module: 'babel/register',
register: function(hook) {
hook({ extensions: '.jsx' });
},
},
{
module: 'node-jsx',
register: function(hook) {
hook.install({ extension: '.jsx', harmony: true });
},
},
],
'.litcoffee': ['coffeescript/register', 'coffee-script/register', 'coffeescript', 'coffee-script'],
'.liticed': 'iced-coffee-script/register',
'.ls': ['livescript', 'LiveScript'],
'.mjs': '/absolute/path/to/interpret/mjs-stub.js',
'.node': null,
'.toml': {
module: 'toml-require',
register: function(hook) {
hook.install();
},
},
'.ts': [
'ts-node/register',
'typescript-node/register',
'typescript-register',
'typescript-require',
'sucrase/register/ts',
{
module: '@babel/register',
register: function(hook) {
hook({ extensions: '.ts' });
},
},
],
'.tsx': [
'ts-node/register',
'typescript-node/register',
'sucrase/register',
{
module: '@babel/register',
register: function(hook) {
hook({ extensions: '.tsx' });
},
},
],
'.wisp': 'wisp/engine/node',
'.xml': 'require-xml',
'.yaml': 'require-yaml',
'.yml': 'require-yaml',
}

Same as above, but only includes the extensions which are JavaScript variants.
Consumers should use the exported extensions or jsVariants object to determine which module should be loaded for a given extension. If a matching extension is found, consumers should do the following:
1. If the value is null, do nothing.
2. If the value is a string, try to require it.
3. If the value is an object, try to require the module property. If successful, the register property (a function) should be called with the module passed as the first argument.
4. If the value is an array, iterate over it, attempting step #2 or #3 until one of the attempts does not throw.
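The four steps above can be sketched as a helper. registerLoader is a hypothetical name and extensions stands in for the exported map; real consumers such as rechoir do more bookkeeping:

```javascript
// Sketch of the consumer algorithm described above (hypothetical helper).
function registerLoader (extensions, ext) {
  const value = extensions[ext]
  if (value == null) return null                          // 1. null: do nothing
  const attempts = Array.isArray(value) ? value : [value] // 4. arrays: try each in turn
  for (const attempt of attempts) {
    try {
      if (typeof attempt === 'string') {
        require(attempt)                                  // 2. string: require it
      } else {
        attempt.register(require(attempt.module))         // 3. object: require, then register
      }
      return attempt                                      // first attempt that doesn't throw wins
    } catch (err) {
      // this candidate failed to load; fall through to the next one
    }
  }
  throw new Error('Unable to register a loader for ' + ext)
}
```

The array case is what lets `.coffee` fall back from coffeescript/register to the legacy coffee-script packages.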
A simple utility for promisifying functions and classes.
A comprehensive list of changes in each version may be found in the CHANGELOG.
Read more about the client libraries for Cloud APIs, including the older Google APIs Client Libraries, in Client Libraries Explained.
Table of contents:
const {promisify} = require('@google-cloud/promisify');
/**
* This is a very basic example function that accepts a callback.
*/
function someCallbackFunction(name, callback) {
if (!name) {
callback(new Error('Name is required!'));
} else {
callback(null, `Well hello there, ${name}!`);
}
}
// let's promisify it!
const somePromiseFunction = promisify(someCallbackFunction);
async function quickstart() {
// now we can just `await` the function to use it like a promisified method
const [result] = await somePromiseFunction('nodestronaut');
console.log(result);
}
quickstart();

It's unlikely you will need to install this package directly, as it will be installed as a dependency when you install other @google-cloud packages.
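The wrapping shown in the quickstart follows the familiar error-first-callback-to-Promise pattern. A simplified stand-in — not the library's actual code, which also handles options and whole classes — might look like this:

```javascript
// Simplified sketch: wrap an error-first callback function in a Promise.
// Results are resolved as an array, matching the [result] destructuring above.
function promisify (fn) {
  return function (...args) {
    return new Promise((resolve, reject) => {
      fn(...args, (err, ...results) => {
        if (err) return reject(err)
        resolve(results)
      })
    })
  }
}

function someCallbackFunction (name, callback) {
  if (!name) {
    callback(new Error('Name is required!'))
  } else {
    callback(null, `Well hello there, ${name}!`)
  }
}

const somePromiseFunction = promisify(someCallbackFunction)
somePromiseFunction('nodestronaut').then(([result]) => console.log(result))
// Well hello there, nodestronaut!
```

Resolving with an array rather than a bare value is what makes the `const [result] = await ...` destructuring in the quickstart work for callbacks that pass multiple results.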
Samples are in the samples/ directory. The samples’ README.md has instructions for running the samples.
| Sample | Source Code | Try it |
|---|---|---|
| Quickstart | source code | |
The Google Cloud Common Promisify Node.js Client API Reference documentation also contains samples.
Our client libraries follow the Node.js release schedule. Libraries are compatible with all current active and maintenance versions of Node.js.
Client libraries targeting some end-of-life versions of Node.js are available, and can be installed via npm dist-tags. The dist-tags follow the naming convention legacy-(version).
Legacy Node.js versions are supported as a best effort:
legacy-8: install client libraries from this dist-tag for versions compatible with Node.js 8.

This library follows Semantic Versioning.
This library is considered to be General Availability (GA). This means it is stable; the code surface will not change in backwards-incompatible ways unless absolutely necessary (e.g. because of critical security issues) or with an extensive deprecation period. Issues and requests against GA libraries are addressed with the highest priority.
More Information: Google Cloud Platform Launch Stages
Contributions welcome! See the Contributing Guide.
Please note that this README.md, the samples/README.md, and a variety of configuration files in this repository (including .nycrc and tsconfig.json) are generated from a central template. To edit one of these files, make an edit to its template in this directory.
Apache Version 2.0
See LICENSE
Returns true if a value has the characteristics of a valid JavaScript descriptor. Works for data descriptors and accessor descriptors.
Install with npm:
var isDescriptor = require('is-descriptor');
isDescriptor({value: 'foo'})
//=> true
isDescriptor({get: function(){}, set: function(){}})
//=> true
isDescriptor({get: 'foo', set: function(){}})
//=> false

You may also check for a descriptor by passing an object as the first argument and property name (string) as the second argument.
var obj = {};
obj.foo = 'abc';
Object.defineProperty(obj, 'bar', {
value: 'xyz'
});
isDescriptor(obj, 'foo');
//=> true
isDescriptor(obj, 'bar');
//=> true

false when not an object
true when the object has valid properties with valid values.
false when the object has invalid properties
isDescriptor({value: 'foo', bar: 'baz'});
//=> false
isDescriptor({value: 'foo', get: noop});
//=> false
isDescriptor({get: noop, value: noop});
//=> false

false when a value is not the correct type
isDescriptor({value: 'foo', enumerable: 'foo'});
//=> false
isDescriptor({value: 'foo', configurable: 'foo'});
//=> false
isDescriptor({value: 'foo', writable: 'foo'});
//=> false

true when the object has valid properties with valid values.
isDescriptor({get: noop, set: noop});
//=> true
isDescriptor({get: noop});
//=> true
isDescriptor({set: noop});
//=> true

false when the object has invalid properties
isDescriptor({get: noop, set: noop, bar: 'baz'});
//=> false
isDescriptor({get: noop, writable: true});
//=> false
isDescriptor({get: noop, value: true});
//=> false

false when an accessor is not a function
isDescriptor({get: noop, set: 'baz'});
//=> false
isDescriptor({get: 'foo', set: noop});
//=> false
isDescriptor({get: 'foo', bar: 'baz'});
//=> false
isDescriptor({get: 'foo', set: 'baz'});
//=> false

false when a value is not the correct type
isDescriptor({get: noop, set: noop, enumerable: 'foo'});
//=> false
isDescriptor({set: noop, configurable: 'foo'});
//=> false
isDescriptor({get: noop, configurable: 'foo'});
//=> false

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 24 | jonschlinkert |
| 1 | doowb |
| 1 | wtgtybhertgeghgtwtg |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 22, 2017. # rc
The non-configurable configuration loader for lazy people.
The only option is to pass rc the name of your app, and your default configuration.
var conf = require('rc')(appname, {
//defaults go here.
port: 2468,
//defaults which are objects will be merged, not replaced
views: {
engine: 'jade'
}
});

rc will return your configuration options merged with the defaults you specify. If you pass in a predefined defaults object, it will be mutated:
If rc finds any config files for your app, the returned config object will have a configs array containing their paths:
var appCfg = require('rc')(appname, conf);
appCfg.configs[0] // /etc/appnamerc
appCfg.configs[1] // /home/dominictarr/.config/appname
appCfg.config // same as appCfg.configs[appCfg.configs.length - 1]

Given your application name (appname), rc will look in all the obvious places for configuration:
- command line arguments, parsed by minimist (e.g. --foo baz, also nested: --foo.bar=baz)
- environment variables prefixed with ${appname}_ (e.g. appname_foo__bar__baz => foo.bar.baz)
- if you passed an option --config file, then from that file
- a local .${appname}rc, or the first found looking in ./ ../ ../../ ../../../ etc.
- $HOME/.${appname}rc
- $HOME/.${appname}/config
- $HOME/.config/${appname}
- $HOME/.config/${appname}/config
- /etc/${appname}rc
- /etc/${appname}/config

All configuration sources that were found will be flattened into one object, so that sources earlier in this list override later ones.
Configuration files (e.g. .appnamerc) may be in either json or ini format. No file extension (.json or .ini) should be used. The example configurations below are equivalent:
ini:
; You can include comments in `ini` format if you want.
dependsOn=0.10.0
; `rc` has built-in support for ini sections, see?
[commands]
www = ./commands/www
console = ./commands/repl
; You can even do nested sections
[generators.options]
engine = ejs
[generators.modules]
new = generate-new
engine = generate-backend
json:
{
// You can even comment your JSON, if you want
"dependsOn": "0.10.0",
"commands": {
"www": "./commands/www",
"console": "./commands/repl"
},
"generators": {
"options": {
"engine": "ejs"
},
"modules": {
"new": "generate-new",
"backend": "generate-backend"
}
}
}

Comments are stripped from JSON config via strip-json-comments.
Since ini and environment variables do not have a standard for types, your application needs to be prepared for strings.
To ensure that string representations of booleans and numbers are always converted into their proper types (especially useful if you intend to do strict === comparisons), consider using a module such as parse-strings-in-object to wrap the config object returned from rc.
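For illustration, such a coercion wrapper might look like the following — a hedged sketch of the idea, not the parse-strings-in-object implementation:

```javascript
// Sketch: recursively convert "true"/"false" and numeric strings from
// ini/env config sources back into booleans and numbers.
function parseStrings (obj) {
  const out = {}
  for (const key of Object.keys(obj)) {
    const value = obj[key]
    if (value && typeof value === 'object' && !Array.isArray(value)) {
      out[key] = parseStrings(value) // recurse into nested sections
    } else if (value === 'true') {
      out[key] = true
    } else if (value === 'false') {
      out[key] = false
    } else if (typeof value === 'string' && value.trim() !== '' && !isNaN(Number(value))) {
      out[key] = Number(value)
    } else {
      out[key] = value // leave everything else untouched
    }
  }
  return out
}

const conf = parseStrings({ port: '3001', verbose: 'true', name: 'myapp' })
console.log(conf) // { port: 3001, verbose: true, name: 'myapp' }
```

Wrapping the object returned from rc this way makes strict `===` comparisons against numbers and booleans safe regardless of which source a value came from.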
Assume you have an application like this (notice the hard-coded defaults passed to rc):
const conf = require('rc')('myapp', {
port: 12345,
mode: 'test'
});
console.log(JSON.stringify(conf, null, 2));
You also have a file config.json, with these contents:
{
"port": 9000,
"foo": "from config json",
"something": "else"
}
And a file .myapprc in the same folder, with these contents:
{
"port": "3001",
"foo": "bar"
}
Here is the expected output from various commands:
node .
{
"port": "3001",
"mode": "test",
"foo": "bar",
"_": [],
"configs": [
"/Users/stephen/repos/conftest/.myapprc"
],
"config": "/Users/stephen/repos/conftest/.myapprc"
}
The default mode from the hard-coded object is retained, but port is overridden by the .myapprc file (automatically found based on the appname match), and foo is added.
node . --foo baz
{
"port": "3001",
"mode": "test",
"foo": "baz",
"_": [],
"configs": [
"/Users/stephen/repos/conftest/.myapprc"
],
"config": "/Users/stephen/repos/conftest/.myapprc"
}
Same result as above, but foo is overridden because command-line arguments take precedence over the .myapprc file.
node . --foo barbar --config config.json
{
"port": 9000,
"mode": "test",
"foo": "barbar",
"something": "else",
"_": [],
"config": "config.json",
"configs": [
"/Users/stephen/repos/conftest/.myapprc",
"config.json"
]
}
Now the port comes from the specified config.json file (overriding the value from .myapprc), and the foo value is overridden by the command line despite also being specified in the config.json file.
argv
You may pass in your own argv as the third argument to rc. This is in case you want to use your own command-line opts parser.
If you have a special need to use a non-standard parser, you can do so by passing in the parser as the 4th argument. (Leave the 3rd argument as null to get the default args parser.)
This may also be used to force a more strict format, such as strict, valid JSON only.
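For instance, a strict parser can simply delegate to JSON.parse, so that comments and ini syntax cause a hard failure (a sketch assuming the parser receives the raw file contents):

```javascript
// Sketch: a strict parser for rc's 4th argument. It receives the raw
// file contents and must return the parsed config object; plain
// JSON.parse rejects comments and ini syntax outright.
const strictJsonParser = (content) => JSON.parse(content);

// Would be wired up as: rc('myapp', defaults, null, strictJsonParser)
console.log(strictJsonParser('{"port": 9000}').port); // → 9000
```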
rc runs fs.statSync, so make sure you don't use it in a hot code path (e.g. a request handler).
A cache object that deletes the least-recently-used items.
var LRU = require("lru-cache")
, options = { max: 500
, length: function (n, key) { return n * 2 + key.length }
, dispose: function (key, n) { n.close() }
, maxAge: 1000 * 60 * 60 }
, cache = new LRU(options)
, otherCache = new LRU(50) // sets just the max size
cache.set("key", "value")
cache.get("key") // "value"
// non-string keys ARE fully supported
// but note that it must be THE SAME object, not
// just a JSON-equivalent object.
var someObject = { a: 1 }
cache.set(someObject, 'a value')
// Object keys are not toString()-ed
cache.set('[object Object]', 'a different value')
assert.equal(cache.get(someObject), 'a value')
// A similar object with same keys/values won't work,
// because it's a different object identity
assert.equal(cache.get({ a: 1 }), undefined)
cache.reset() // empty the cache
If you put more stuff in it, then items will fall out.
If you try to put an oversized thing in it, then it’ll fall out right away.
max — The maximum size of the cache, checked by applying the length function to all values in the cache. Not setting this is kind of silly, since that’s the whole purpose of this lib, but it defaults to Infinity. Setting it to a non-number or negative number will throw a TypeError. Setting it to 0 makes it Infinity.
maxAge — Maximum age in ms. Items are not pro-actively pruned out as they age, but if you try to get an item that is too old, it’ll drop it and return undefined instead of giving it to you. Setting this to a negative value will make everything seem old! Setting it to a non-number will throw a TypeError.
length — Function that is used to calculate the length of stored items. If you’re storing strings or buffers, then you probably want to do something like function(n, key){return n.length}. The default is function(){return 1}, which is fine if you want to store max like-sized things. The item is passed as the first argument, and the key is passed as the second argument.
dispose — Function that is called on items when they are dropped from the cache. This can be handy if you want to close file descriptors or do other cleanup tasks when items are no longer accessible. Called with key, value. It’s called before actually removing the item from the internal cache, so if you want to immediately put it back in, you’ll have to do that in a nextTick or setTimeout callback or it won’t do anything.
stale — By default, if you set a maxAge, it’ll only actually pull stale items out of the cache when you get(key). (That is, it’s not pre-emptively doing a setTimeout or anything.) If you set stale:true, it’ll return the stale value before deleting it. If you don’t set this, then it’ll return undefined when you try to get a stale entry, as if it had already been deleted.
noDisposeOnSet — By default, if you set a dispose() method, then it’ll be called whenever a set() operation overwrites an existing key. If you set this option, dispose() will only be called when a key falls out of the cache, not when it is overwritten.
updateAgeOnGet — When using time-expiring entries with maxAge, setting this to true will make each item’s effective time update to the current time whenever it is retrieved from cache, causing it to not expire. (It can still fall out of cache based on recency of use, of course.)
set(key, value, maxAge)
get(key) => value
Both of these will update the “recently used”-ness of the key. They do what you think. maxAge is optional and overrides the cache maxAge option if provided.
If the key is not found, get() will return undefined.
The key and val can be any value.
peek(key)
Returns the key value (or undefined if not found) without updating the “recently used”-ness of the key.
(If you find yourself using this a lot, you might be using the wrong sort of data structure, but there are some use cases where it’s handy.)
del(key)
Deletes a key out of the cache.
reset()
Clear the cache entirely, throwing away all values.
has(key)
Check if a key is in the cache, without updating the recent-ness or deleting it for being stale.
forEach(function(value,key,cache), [thisp])
Just like Array.prototype.forEach. Iterates over all the keys in the cache, in order of recent-ness. (Ie, more recently used items are iterated over first.)
rforEach(function(value,key,cache), [thisp])
The same as cache.forEach(...) but items are iterated over in reverse order. (ie, less recently used items are iterated over first.)
keys()
Return an array of the keys in the cache.
values()
Return an array of the values in the cache.
length
Return total length of objects in cache taking into account length options function.
itemCount
Return total quantity of objects currently in cache. Note, that stale (see options) items are returned as part of this item count.
dump()
Return an array of the cache entries ready for serialization and usage with `destinationCache.load(arr)`.
load(cacheEntriesArray)
Loads another cache entries array, obtained with sourceCache.dump(), into the cache. The destination cache is reset before loading new entries
prune()
Manually iterates over the entire cache, proactively pruning old entries.
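For intuition, the recency bookkeeping behind set()/get() and eviction can be sketched with a plain Map, whose insertion order makes the least-recently-used entry easy to find. This is a toy illustration, not lru-cache's implementation; it ignores maxAge, length accounting, dispose, and the other options:

```javascript
// Toy LRU built on Map's insertion order (illustration only).
class TinyLRU {
  constructor(max) {
    this.max = max;
    this.map = new Map();
  }
  get(key) {
    if (!this.map.has(key)) return undefined;
    const value = this.map.get(key);
    this.map.delete(key); // re-insert to mark as most recently used
    this.map.set(key, value);
    return value;
  }
  set(key, value) {
    this.map.delete(key);
    this.map.set(key, value);
    if (this.map.size > this.max) {
      // Map iterates in insertion order, so the first key is least recent
      this.map.delete(this.map.keys().next().value);
    }
  }
}

const cache = new TinyLRU(2);
cache.set('a', 1);
cache.set('b', 2);
cache.get('a');    // touch 'a' so 'b' becomes least recently used
cache.set('c', 3); // evicts 'b'
console.log(cache.get('b')); // → undefined
console.log(cache.get('a')); // → 1
```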
A robust Punycode converter that fully complies to RFC 3492 and RFC 5891, and works on nearly all JavaScript platforms.
This JavaScript library is the result of comparing, optimizing and documenting different open-source implementations of the Punycode algorithm:
punycode.c by Markus W. Scherer (IBM)
punycode.c by Ben Noordhuis
punycode.js by Ben Noordhuis (note: not fully compliant)
This project is bundled with Node.js v0.6.2+.
Via npm (only required for Node.js releases older than v0.6.2):
Via Bower:
Via Component:
In a browser:
In Narwhal, Node.js, and RingoJS:
In Rhino:
Using an AMD loader like RequireJS:
require(
{
'paths': {
'punycode': 'path/to/punycode'
}
},
['punycode'],
function(punycode) {
console.log(punycode);
}
);
punycode.decode(string)
Converts a Punycode string of ASCII symbols to a string of Unicode symbols.
// decode domain name parts
punycode.decode('maana-pta'); // 'mañana'
punycode.decode('--dqo34k'); // '☃-⌘'
punycode.encode(string)
Converts a string of Unicode symbols to a Punycode string of ASCII symbols.
// encode domain name parts
punycode.encode('mañana'); // 'maana-pta'
punycode.encode('☃-⌘'); // '--dqo34k'
punycode.toUnicode(input)
Converts a Punycode string representing a domain name or an email address to Unicode. Only the Punycoded parts of the input will be converted, i.e. it doesn’t matter if you call it on a string that has already been converted to Unicode.
// decode domain names
punycode.toUnicode('xn--maana-pta.com');
// → 'mañana.com'
punycode.toUnicode('xn----dqo34k.com');
// → '☃-⌘.com'
// decode email addresses
punycode.toUnicode('джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq');
// → 'джумла@джpумлатест.bрфa'
punycode.toASCII(input)
Converts a Unicode string representing a domain name or an email address to Punycode. Only the non-ASCII parts of the input will be converted, i.e. it doesn’t matter if you call it with a domain that’s already in ASCII.
// encode domain names
punycode.toASCII('mañana.com');
// → 'xn--maana-pta.com'
punycode.toASCII('☃-⌘.com');
// → 'xn----dqo34k.com'
// encode email addresses
punycode.toASCII('джумла@джpумлатест.bрфa');
// → 'джумла@xn--p-8sbkgc5ag7bhce.xn--ba-lmcq'
punycode.ucs2
punycode.ucs2.decode(string)
Creates an array containing the numeric code point values of each Unicode symbol in the string. While JavaScript uses UCS-2 internally, this function will convert a pair of surrogate halves (each of which UCS-2 exposes as separate characters) into a single code point, matching UTF-16.
punycode.ucs2.decode('abc');
// → [0x61, 0x62, 0x63]
// surrogate pair for U+1D306 TETRAGRAM FOR CENTRE:
punycode.ucs2.decode('\uD834\uDF06');
// → [0x1D306]
punycode.ucs2.encode(codePoints)
Creates a string based on an array of numeric code point values.
punycode.ucs2.encode([0x61, 0x62, 0x63]);
// → 'abc'
punycode.ucs2.encode([0x1D306]);
// → '\uD834\uDF06'
punycode.version
A string representing the current Punycode.js version number.
After cloning this repository, run npm install --dev to install the dependencies needed for Punycode.js development and testing. You may want to install Istanbul globally using npm install istanbul -g.
Once that’s done, you can run the unit tests in Node using npm test or node tests/tests.js. To run the tests in Rhino, Ringo, Narwhal, PhantomJS, and web browsers as well, use grunt test.
To generate the code coverage report, use grunt cover.
Feel free to fork if you see possible improvements!
| Mathias Bynens |
| John-David Dalton |
Simplify eslint rules by visiting templates
npm install eslint-template-visitor
# or
yarn add eslint-template-visitor
+const eslintTemplateVisitor = require('eslint-template-visitor');
+
+const templates = eslintTemplateVisitor();
+
+const objectVariable = templates.variable();
+const argumentsVariable = templates.spreadVariable();
+
+const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;
const create = context => {
const sourceCode = context.getSourceCode();
- return {
- CallExpression(node) {
- if (node.callee.type !== 'MemberExpression'
- || node.callee.property.type !== 'Identifier'
- || node.callee.property.name !== 'substr'
- ) {
- return;
- }
-
- const objectNode = node.callee.object;
+ return templates.visitor({
+ [substrCallTemplate](node) {
+ const objectNode = substrCallTemplate.context.getMatch(objectVariable);
+ const argumentNodes = substrCallTemplate.context.getMatch(argumentsVariable);
const problem = {
node,
message: 'Prefer `String#slice()` over `String#substr()`.',
};
- const canFix = node.arguments.length === 0;
+ const canFix = argumentNodes.length === 0;
if (canFix) {
problem.fix = fixer => fixer.replaceText(node, sourceCode.getText(objectNode) + '.slice()');
}
context.report(problem);
},
- };
+ });
};
See examples for more.
eslintTemplateVisitor(options?)
Create a template visitor.
Example:
const eslintTemplateVisitor = require('eslint-template-visitor');
const templates = eslintTemplateVisitor();
options
Type: object
parserOptions
Options for the template parser. Passed down to babel-eslint.
Example:
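A plausible sketch of passing parser options through (the option names and values here are illustrative assumptions, not documented defaults):

```javascript
// Illustrative only: options.parserOptions is forwarded to babel-eslint.
const eslintTemplateVisitor = require('eslint-template-visitor');

const templates = eslintTemplateVisitor({
	parserOptions: {
		ecmaVersion: 2018,
		sourceType: 'module'
	}
});
```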
templates.variable()
Create a variable to be used in a template. Such a variable can match exactly one AST node.
templates.spreadVariable()
Create a spread variable. A spread variable can match an array of AST nodes.
This is useful for matching a number of arguments in a call or a number of statements in a block.
templates.variableDeclarationVariable()
Create a variable declaration variable. A variable declaration variable can match any type of variable declaration node.
This is useful for matching any variable declaration, be it const, let or var.
Use it in place of a variable declaration keyword:
const variableDeclarationVariable = templates.variableDeclarationVariable();
const template = templates.template`() => {
${variableDeclarationVariable} x = y;
}`;
templates.template tag
Creates a template possibly containing variables.
Example:
const objectVariable = templates.variable();
const argumentsVariable = templates.spreadVariable();
const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;
const create = () => templates.visitor({
[substrCallTemplate](node) {
// `node` here is the matching `.substr` call (i.e. `CallExpression`)
}
});
templates.visitor({ /* visitors */ })
Used to merge template visitors with common ESLint visitors.
Example:
const create = () => templates.visitor({
[substrCallTemplate](node) {
// Template visitor
},
FunctionDeclaration(node) {
// Simple node type visitor
},
'IfStatement > BlockStatement'(node) {
// ESLint selector visitor
},
template.context
A template match context. This property is defined only within a visitor call (in other words, only when working on a matching node).
Example:
const create = () => templates.visitor({
[substrCallTemplate](node) {
// `substrCallTemplate.context` can be used here
},
FunctionDeclaration(node) {
// `substrCallTemplate.context` is not defined here, and it does not make sense to use it here,
// since `substrCallTemplate` did not match an AST node.
},
});
template.context.getMatch(variable)
Used to get a match for a variable.
Example:
const objectVariable = templates.variable();
const argumentsVariable = templates.spreadVariable();
const substrCallTemplate = templates.template`${objectVariable}.substr(${argumentsVariable})`;
const create = () => templates.visitor({
[substrCallTemplate](node) {
const objectNode = substrCallTemplate.context.getMatch(objectVariable);
// For example, let's check if `objectNode` is an `Identifier`: `objectNode.type === 'Identifier'`
const argumentNodes = substrCallTemplate.context.getMatch(argumentsVariable);
// `Array.isArray(argumentNodes) === true`
},
});
template.narrow(selector, targetMatchIndex = 0)
Narrow the template to a part of the AST matching the selector.
Sometimes you cannot define a wanted template at the top level due to JS syntax limitations. For example, you can’t have await or yield at the top level of a script.
Use a wrapper function in the template and then narrow it to a wanted AST node:
const template = templates.template`
async () => { await 1; }
`.narrow('BlockStatement > :has(AwaitExpression)');
The template above is equivalent to one matching the await statement directly, except that such a template cannot be defined at the top level due to espree limitations.
zlib port to javascript, very fast!
Why pako is cool:
This project was done to understand how fast JS can be, and whether it is necessary to develop native C modules for CPU-intensive tasks. Enjoy the result!
Famous projects, using pako:
Benchmarks:
node v0.10.26, 1mb sample:
deflate-dankogai x 4.73 ops/sec ±0.82% (15 runs sampled)
deflate-gildas x 4.58 ops/sec ±2.33% (15 runs sampled)
deflate-imaya x 3.22 ops/sec ±3.95% (12 runs sampled)
! deflate-pako x 6.99 ops/sec ±0.51% (21 runs sampled)
deflate-pako-string x 5.89 ops/sec ±0.77% (18 runs sampled)
deflate-pako-untyped x 4.39 ops/sec ±1.58% (14 runs sampled)
* deflate-zlib x 14.71 ops/sec ±4.23% (59 runs sampled)
inflate-dankogai x 32.16 ops/sec ±0.13% (56 runs sampled)
inflate-imaya x 30.35 ops/sec ±0.92% (53 runs sampled)
! inflate-pako x 69.89 ops/sec ±1.46% (71 runs sampled)
inflate-pako-string x 19.22 ops/sec ±1.86% (49 runs sampled)
inflate-pako-untyped x 17.19 ops/sec ±0.85% (32 runs sampled)
* inflate-zlib x 70.03 ops/sec ±1.64% (81 runs sampled)
node v0.11.12, 1mb sample:
deflate-dankogai x 5.60 ops/sec ±0.49% (17 runs sampled)
deflate-gildas x 5.06 ops/sec ±6.00% (16 runs sampled)
deflate-imaya x 3.52 ops/sec ±3.71% (13 runs sampled)
! deflate-pako x 11.52 ops/sec ±0.22% (32 runs sampled)
deflate-pako-string x 9.53 ops/sec ±1.12% (27 runs sampled)
deflate-pako-untyped x 5.44 ops/sec ±0.72% (17 runs sampled)
* deflate-zlib x 14.05 ops/sec ±3.34% (63 runs sampled)
inflate-dankogai x 42.19 ops/sec ±0.09% (56 runs sampled)
inflate-imaya x 79.68 ops/sec ±1.07% (68 runs sampled)
! inflate-pako x 97.52 ops/sec ±0.83% (80 runs sampled)
inflate-pako-string x 45.19 ops/sec ±1.69% (57 runs sampled)
inflate-pako-untyped x 24.35 ops/sec ±2.59% (40 runs sampled)
* inflate-zlib x 60.32 ops/sec ±1.36% (69 runs sampled)
zlib’s test is partially affected by marshalling (which matters for inflate only). You can change the deflate level to 0 in the benchmark source to investigate the details. For deflate level 6, the results can be considered correct.
Install:
node.js:
npm install pako
browser:
bower install pako
Full docs - http://nodeca.github.io/pako/
var pako = require('pako');
// Deflate
//
var input = new Uint8Array();
//... fill input data here
var output = pako.deflate(input);
// Inflate (simple wrapper can throw exception on broken stream)
//
var compressed = new Uint8Array();
//... fill data to uncompress here
try {
var result = pako.inflate(compressed);
} catch (err) {
console.log(err);
}
//
// Alternate interface for chunking & without exceptions
//
var inflator = new pako.Inflate();
inflator.push(chunk1, false);
inflator.push(chunk2, false);
...
inflator.push(chunkN, true); // true -> last chunk
if (inflator.err) {
console.log(inflator.msg);
}
var output = inflator.result;
Sometimes you may wish to work with strings, for example to send big objects as JSON to a server. Pako detects the input data type, and you can force the output to be a string with the option { to: 'string' }.
var pako = require('pako');
var test = { my: 'super', puper: [456, 567], awesome: 'pako' };
var binaryString = pako.deflate(JSON.stringify(test), { to: 'string' });
//
// Here you can do base64 encode, make xhr requests and so on.
//
var restored = JSON.parse(pako.inflate(binaryString, { to: 'string' }));
Pako does not contain some specific zlib functions:
deflateCopy, deflateBound, deflateParams, deflatePending, deflatePrime, deflateTune, inflateCopy, inflateMark, inflatePrime, inflateGetDictionary, inflateSync, inflateSyncPoint, inflateUndermine.
Available as part of the Tidelift Subscription
The maintainers of pako and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.
Personal thanks to:
Original implementation (in C):
/lib/zlib content
A library for efficiently walking a directory recursively.
name, path, dirent and stats (optional).
old and modern mode.
npm install @nodelib/fs.walk
Reads the directory recursively and asynchronously. Requires a callback function.
:book: If you want to use the Promise API, use
util.promisify.
fsWalk.walk('path', (error, entries) => { /* … */ });
fsWalk.walk('path', {}, (error, entries) => { /* … */ });
fsWalk.walk('path', new fsWalk.Settings(), (error, entries) => { /* … */ });
Reads the directory recursively and asynchronously. A Readable Stream is used as a provider.
const stream = fsWalk.walkStream('path');
const stream = fsWalk.walkStream('path', {});
const stream = fsWalk.walkStream('path', new fsWalk.Settings());
Reads the directory recursively and synchronously. Returns an array of entries.
const entries = fsWalk.walkSync('path');
const entries = fsWalk.walkSync('path', {});
const entries = fsWalk.walkSync('path', new fsWalk.Settings());truestring | Buffer | URLA path to a file. If a URL is provided, it must use the file: protocol.
Required: false. Type: Options | Settings. Default: an instance of the Settings class.
An Options object or an instance of the Settings class.
:book: When you pass a plain object, an instance of the
Settings class will be created automatically. If you plan to call the method frequently, use a pre-created instance of the Settings class.
A class of full settings of the package.
const settings = new fsWalk.Settings({ followSymbolicLinks: true });
const entries = fsWalk.walkSync('path', settings);
name — The name of the entry (unknown.txt).
path — The path of the entry relative to the call directory (root/unknown.txt).
dirent — An instance of the fs.Dirent class.
stats (optional) — An instance of the fs.Stats class.
basePath
Type: string. Default: undefined.
By default, all paths are built relative to the root path. You can use this option to set a custom root path.
In the example below we read the files from the root directory, but in the results the root path will be custom.
fsWalk.walkSync('root'); // → ['root/file.txt']
fsWalk.walkSync('root', { basePath: 'custom' }); // → ['custom/file.txt']
concurrency
Type: number. Default: Infinity.
The maximum number of concurrent calls to fs.readdir.
:book: The higher the number, the higher the performance and the load on the file system. If you want to read in quiet mode, set the value to
4 * os.cpus().length (4 is the default size of the thread pool for work scheduling).
deepFilter
Type: DeepFilterFunction. Default: undefined.
A function that indicates whether the directory will be read deep or not.
// Skip all directories that starts with `node_modules`
const filter: DeepFilterFunction = (entry) => !entry.path.startsWith('node_modules');
entryFilter
Type: EntryFilterFunction. Default: undefined.
A function that indicates whether the entry will be included in the results or not.
// Exclude all `.js` files from results
const filter: EntryFilterFunction = (entry) => !entry.name.endsWith('.js');
errorFilter
Type: ErrorFilterFunction. Default: undefined.
A function that allows you to skip errors that occur when reading directories.
For example, you can skip ENOENT errors if required:
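A sketch of such a filter (the predicate receives the Error and returns true to suppress it; the wiring into Settings shown in the comment is an assumption based on the filter options described here):

```javascript
// Suppress ENOENT errors (e.g. an entry removed between readdir and stat),
// while letting every other error propagate.
const skipEnoent = (error) => error.code === 'ENOENT';

// Hypothetical wiring: new fsWalk.Settings({ errorFilter: skipEnoent })
const missing = Object.assign(new Error('no such file'), { code: 'ENOENT' });
const denied = Object.assign(new Error('permission denied'), { code: 'EACCES' });
console.log(skipEnoent(missing)); // → true
console.log(skipEnoent(denied));  // → false
```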
stats
Type: boolean. Default: false.
Adds an instance of the fs.Stats class to the Entry.
:book: Always use
fs.readdir with additional fs.lstat/fs.stat calls to determine the entry type.
followSymbolicLinks
Type: boolean. Default: false.
Follow symbolic links or not. Calls fs.stat on a symbolic link if true.
throwErrorOnBrokenSymbolicLink
Type: boolean. Default: true.
Throw an error when a symbolic link is broken if true, or safely return the lstat call result if false.
pathSegmentSeparator
Type: string. Default: path.sep.
By default, this package uses the correct path separator for your OS (\ on Windows, / on Unix-like systems). But you can set this option to any separator character(s) that you want to use instead.
fs
Type: FileSystemAdapter. Default: the built-in fs module.
By default, the built-in Node.js module (fs) is used to work with the file system. You can replace any method with your own.
interface FileSystemAdapter {
lstat: typeof fs.lstat;
stat: typeof fs.stat;
lstatSync: typeof fs.lstatSync;
statSync: typeof fs.statSync;
readdir: typeof fs.readdir;
readdirSync: typeof fs.readdirSync;
}
const settings = new fsWalk.Settings({
fs: { lstat: fakeLstat }
});
See the Releases section of our GitHub project for the changelog of each release version.
Enforce best practices for JavaScript promises.
You’ll first need to install ESLint:
npm install eslint --save-dev
Next, install eslint-plugin-promise:
npm install eslint-plugin-promise --save-dev
Note: If you installed ESLint globally (using the -g flag) then you must also install eslint-plugin-promise globally.
Add promise to the plugins section of your .eslintrc.json configuration file. You can omit the eslint-plugin- prefix:
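For example, a minimal .eslintrc.json fragment:

```json
{
  "plugins": ["promise"]
}
```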
Then configure the rules you want to use under the rules section.
{
"rules": {
"promise/always-return": "error",
"promise/no-return-wrap": "error",
"promise/param-names": "error",
"promise/catch-or-return": "error",
"promise/no-native": "off",
"promise/no-nesting": "warn",
"promise/no-promise-in-callback": "warn",
"promise/no-callback-in-promise": "warn",
"promise/avoid-new": "warn",
"promise/no-new-statics": "error",
"promise/no-return-in-finally": "warn",
"promise/valid-params": "warn"
}
}
or start with the recommended rule set:
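A minimal .eslintrc.json fragment using the plugin's recommended preset:

```json
{
  "extends": ["plugin:promise/recommended"]
}
```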
| rule | description | recommended | fixable |
|---|---|---|---|
| catch-or-return | Enforces the use of catch() on un-returned promises. | :bangbang: | |
| no-return-wrap | Avoid wrapping values in Promise.resolve or Promise.reject when not needed. | :bangbang: | |
| param-names | Enforce consistent param names and ordering when creating new promises. | :bangbang: | |
| always-return | Return inside each then() to create readable and reusable Promise chains. | :bangbang: | |
| no-native | In an ES5 environment, make sure to create a Promise constructor before using. | | |
| no-nesting | Avoid nested then() or catch() statements. | :warning: | |
| no-promise-in-callback | Avoid using promises inside of callbacks. | :warning: | |
| no-callback-in-promise | Avoid calling cb() inside of a then() (use nodeify instead). | :warning: | |
| avoid-new | Avoid creating new promises outside of utility libs (use pify instead). | | |
| no-new-statics | Avoid calling new on a Promise static method. | :bangbang: | :wrench: |
| no-return-in-finally | Disallow return statements in finally(). | :warning: | |
| valid-params | Ensures the proper number of arguments are passed to Promise functions. | :warning: | |
| prefer-await-to-then | Prefer await to then() for reading Promise values. | :seven: | |
| prefer-await-to-callbacks | Prefer async/await to the callback pattern. | :seven: | |
Key
| icon | description |
|---|---|
| :bangbang: | Reports as error in recommended configuration |
| :warning: | Reports as warning in recommended configuration |
| :seven: | ES2017 Async Await rules |
| :wrench: | Rule is fixable with eslint --fix |
URI.js is an RFC 3986 compliant, scheme extendable URI parsing/validating/resolving library for all JavaScript environments (browsers, Node.js, etc). It is also compliant with the IRI (RFC 3987), IDNA (RFC 5890), IPv6 Address (RFC 5952), IPv6 Zone Identifier (RFC 6874) specifications.
URI.js has an extensive test suite, and works in all (Node.js, web) environments. It weighs in at 6.4kb (gzipped, 17kb deflated).
URI.parse("uri://user:pass@example.com:123/one/two.three?q1=a1&q2=a2#body");
//returns:
//{
//  scheme : "uri",
//  userinfo : "user:pass",
//  host : "example.com",
//  port : 123,
//  path : "/one/two.three",
//  query : "q1=a1&q2=a2",
//  fragment : "body"
//}
URI.serialize({scheme : "http", host : "example.com", fragment : "footer"}) === "http://example.com/#footer"
URI.resolve("uri://a/b/c/d?q", "../../g") === "uri://a/g"
URI.normalize("HTTP://ABC.com:80/%7Esmith/home.html") === "http://abc.com/~smith/home.html"
URI.equal("example://a/b/c/%7Bfoo%7D", "eXAMPLE://a/./b/../b/%63/%7bfoo%7d") === true
//IPv4 normalization
URI.normalize("//192.068.001.000") === "//192.68.1.0"
//IPv6 normalization
URI.normalize("//[2001:0:0DB8::0:0001]") === "//[2001:0:db8::1]"
//IPv6 zone identifier support
URI.parse("//[2001:db8::7%25en1]");
//returns:
//{
//  host : "2001:db8::7%en1"
//}
//convert IRI to URI
URI.serialize(URI.parse("http://examplé.org/rosé")) === "http://xn--exampl-gva.org/ros%C3%A9"
//convert URI to IRI
URI.serialize(URI.parse("http://xn--exampl-gva.org/ros%C3%A9"), {iri:true}) === "http://examplé.org/rosé"
All of the above functions can accept an additional options argument that is an object that can contain one or more of the following properties:
scheme (string)
Indicates the scheme that the URI should be treated as, overriding the URI’s normal scheme parsing behavior.
reference (string)
If set to "suffix", it indicates that the URI is in the suffix format, and the validator will use the option’s scheme property to determine the URI’s scheme.
tolerant (boolean, false)
If set to true, the parser will relax URI resolving rules.
absolutePath (boolean, false)
If set to true, the serializer will not resolve a relative path component.
iri (boolean, false)
If set to true, the serializer will unescape non-ASCII characters as per RFC 3987.
If set to true, the parser will unescape non-ASCII characters in the parsed output as per RFC 3987.
domainHost (boolean, false)
If set to true, the library will treat the host component as a domain name, and convert IDNs (International Domain Names) as per RFC 5891.
URI.js supports inserting custom scheme dependent processing rules. Currently, URI.js has built in support for the following schemes:
URI.equal("HTTP://ABC.COM:80", "http://abc.com/") === true
URI.equal("https://abc.com", "HTTPS://ABC.COM:443/") === true
URI.parse("wss://example.com/foo?bar=baz");
//returns:
//{
//  scheme : "wss",
//  host : "example.com",
//  resourceName : "/foo?bar=baz",
//  secure : true
//}
URI.equal("WS://ABC.COM:80/chat#one", "ws://abc.com/chat") === true
URI.parse("mailto:alpha@example.com,bravo@example.com?subject=SUBSCRIBE&body=Sign%20me%20up!");
//returns:
//{
//  scheme : "mailto",
//  to : ["alpha@example.com", "bravo@example.com"],
//  subject : "SUBSCRIBE",
//  body : "Sign me up!"
//}
URI.serialize({ scheme : "mailto", to : ["alpha@example.com"], subject : "REMOVE", body : "Please remove me", headers : { cc : "charlie@example.com" } }) === "mailto:alpha@example.com?cc=charlie@example.com&subject=REMOVE&body=Please%20remove%20me"
URI.parse("urn:example:foo");
//returns:
//{
//  scheme : "urn",
//  nid : "example",
//  nss : "foo"
//}
URI.parse("urn:uuid:f81d4fae-7dec-11d0-a765-00a0c91e6bf6");
//returns:
//{
//  scheme : "urn",
//  nid : "uuid",
//  uuid : "f81d4fae-7dec-11d0-a765-00a0c91e6bf6"
//}
To load in a browser, use the following tag:
To load in a CommonJS/Module environment, first install with npm/yarn by running on the command line:
npm install uri-js # OR yarn add uri-js
Then, in your code, load it using:
const URI = require("uri-js");
If you are writing your code in ES6+ (ESNEXT) or TypeScript, you would load it using:
import * as URI from "uri-js";
Or you can load just what you need using named exports:
import { parse, serialize, resolve, resolveComponents, normalize, equal, removeDotSegments, pctEncChar, pctDecChars, escapeComponent, unescapeComponent } from "uri-js";
URN parsing has been completely changed to better align with the specification. The scheme is now always urn, but there are two new properties: nid, which contains the Namespace Identifier, and nss, which contains the Namespace Specific String. The nss property will be removed by higher order scheme handlers, such as the UUID URN scheme handler.
The UUID of a URN can now be found in the uuid property.
URI validation has been removed as it was slow, exposed a vulnerability, and was generally not useful.
The errors array on parsed components is now an error string.
A minimal matching utility.
This is the matching library used internally by npm.
It works by converting glob expressions into JavaScript RegExp objects.
var minimatch = require("minimatch")
minimatch("bar.foo", "*.foo") // true!
minimatch("bar.foo", "*.bar") // false!
minimatch("bar.foo", "*.+(bar|foo)", { debug: true }) // true, and noisy!

Supported features include brace expansion, extended glob matching, and "globstar" ** matching. See:

man sh
man bash
man 3 fnmatch
man 5 gitignore

Create a minimatch object by instantiating the minimatch.Minimatch class.
pattern The original pattern the minimatch object represents.
options The options supplied to the constructor.
set A 2-dimensional array of regexp or string expressions. Each row in the array corresponds to a brace-expanded pattern. Each item in the row corresponds to a single path-part. For example, the pattern {a,b/c}/d would expand to a set of patterns like:
[ [ a, d ]
, [ b, c, d ] ]
If a portion of the pattern doesn’t have any “magic” in it (that is, it’s something like "foo" rather than fo*o?), then it will be left as a string rather than converted to a regular expression.
regexp Created by the makeRe method. A single regular expression expressing the entire pattern. This is useful in cases where you wish to use the pattern somewhat like fnmatch(3) with FNM_PATH enabled.
negate True if the pattern is negated.
comment True if the pattern is a comment.
empty True if the pattern is "".
makeRe Generate the regexp member if necessary, and return it. Will return false if the pattern is invalid.
match(fname) Return true if the filename matches the pattern, or false otherwise.
matchOne(fileArray, patternArray, partial) Take a /-split filename, and match it against a single row in the regExpSet. This method is mainly for internal use, but is exposed so that it can be used by a glob-walker that needs to avoid excessive filesystem calls.
All other methods are internal, and will be called as necessary.
Main export. Tests a path against the pattern using the options.
Returns a function that tests its supplied argument, suitable for use with Array.filter. Example:
Match against the list of files, in the style of fnmatch or glob. If nothing is matched, and options.nonull is set, then return a list containing the pattern itself.
Make a regular expression object from the pattern.
All options are false by default.
Dump a ton of stuff to stderr.
Do not expand {a,b} and {1..3} brace sets.
Disable ** matching against multiple folder names.
Allow patterns to match filenames starting with a period, even if the pattern does not explicitly have a period in that spot.
Note that by default, a/**/b will not match a/.d/b, unless dot is set.
Disable “extglob” style patterns like +(a|b).
Perform a case-insensitive match.
When a match is not found by minimatch.match, return a list containing the pattern itself if this option is set. When not set, an empty list is returned if there are no matches.
If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.
Suppress the behavior of treating # at the start of a pattern as a comment.
Suppress the behavior of treating a leading ! character as negation.
Returns from negate expressions the same as if they were not negated. (Ie, true on a hit, false on a miss.)
While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between minimatch and other implementations, and are intentional.
If the pattern starts with a ! character, then it is negated. Set the nonegate flag to suppress this behavior, and treat leading ! characters normally. This is perhaps relevant if you wish to start the pattern with a negative extglob pattern like !(a|B). Multiple ! characters at the start of a pattern will negate the pattern multiple times.
If a pattern starts with #, then it is treated as a comment, and will not match anything. Use \# to match a literal # at the start of a line, or set the nocomment flag to suppress this behavior.
The double-star character ** is supported by default, unless the noglobstar flag is set. This is supported in the manner of bsdglob and bash 4.1, where ** only has special significance if it is the only thing in a path part. That is, a/**/b will match a/x/y/b, but a/**b will not.
If an escaped pattern has no matches, and the nonull flag is set, then minimatch.match returns the pattern as-provided, rather than interpreting the character escapes. For example, minimatch.match([], "\\*a\\?") will return "\\*a\\?" rather than "*a?". This is akin to setting the nullglob option in bash, except that it does not resolve escaped pattern characters.
If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like +(a|{b),c)}, which would not be valid in bash or zsh, is expanded first into the set of +(a|b) and +(a|c), and those patterns are checked for validity. Since those two are valid, matching proceeds.
Gets the entire buffer of a stream either as a Buffer or a string. Validates the stream’s length against an expected length and maximum limit. Ideal for parsing request bodies.
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
This module includes a TypeScript declaration file to enable auto complete in compatible editors and type information for TypeScript projects. This module depends on the Node.js types, so install @types/node:
Returns a promise if no callback specified and global Promise exists.
Options:
length - The length of the stream. If the contents of the stream do not add up to this length, a 400 error code is returned.
limit - The byte limit of the body. This is the number of bytes or any string format supported by bytes, for example 1000, '500kb' or '3mb'. If the body ends up being larger than this limit, a 413 error code is returned.
encoding - The encoding to use to decode the body into a string. By default, a Buffer instance will be returned when no encoding is specified. Most likely, you want utf-8, so setting encoding to true will decode as utf-8. You can use any type of encoding supported by iconv-lite.
You can also pass a string in place of options to just specify the encoding.
If an error occurs, the stream will be paused, everything unpiped, and you are responsible for correctly disposing the stream. For HTTP requests, no handling is required if you send a response. For streams that use file descriptors, you should stream.destroy() or stream.close() to prevent leaks.
This module creates errors depending on the error condition during reading. The error may be an error from the underlying Node.js implementation, but is otherwise an error created by this module, which has the following attributes:
limit - the limit in bytes
length and expected - the expected length of the stream
received - the received bytes
encoding - the invalid encoding
status and statusCode - the corresponding status code for the error
type - the error type
The errors from this module have a type property which allows for the programmatic determination of the type of error returned.
This error will occur when the encoding option is specified, but the value does not map to an encoding supported by the iconv-lite module.
This error will occur when the limit option is specified, but the stream has an entity that is larger.
This error will occur when the request stream is aborted by the client before reading the body has finished.
This error will occur when the length option is specified, but the stream has emitted more bytes.
This error will occur when the given stream has an encoding set on it, making it a decoded stream. The stream should not have an encoding set and is expected to emit Buffer objects.
var contentType = require('content-type')
var express = require('express')
var getRawBody = require('raw-body')
var app = express()
app.use(function (req, res, next) {
getRawBody(req, {
length: req.headers['content-length'],
limit: '1mb',
encoding: contentType.parse(req).parameters.charset
}, function (err, string) {
if (err) return next(err)
req.text = string
next()
})
})
// now access req.text

var contentType = require('content-type')
var getRawBody = require('raw-body')
var koa = require('koa')
var app = koa()
app.use(function * (next) {
this.text = yield getRawBody(this.req, {
length: this.req.headers['content-length'],
limit: '1mb',
encoding: contentType.parse(this.req).parameters.charset
})
yield next
})
// now access this.text

To use this library as a promise, simply omit the callback and a promise is returned, provided that a global Promise is defined.
var getRawBody = require('raw-body')
var http = require('http')
var server = http.createServer(function (req, res) {
getRawBody(req)
.then(function (buf) {
res.statusCode = 200
res.end(buf.length + ' bytes submitted')
})
.catch(function (err) {
res.statusCode = 500
res.end(err.message)
})
})
server.listen(3000)

import * as getRawBody from 'raw-body';
import * as http from 'http';
const server = http.createServer((req, res) => {
getRawBody(req)
.then((buf) => {
res.statusCode = 200;
res.end(buf.length + ' bytes submitted');
})
.catch((err) => {
res.statusCode = err.statusCode;
res.end(err.message);
});
});
server.listen(3000);

Easily read/write JSON files in Node.js. Note: this module cannot be used in the browser.
Writing JSON.stringify() and then fs.writeFile() and JSON.parse() with fs.readFile() enclosed in try/catch blocks became annoying.
npm install --save jsonfile
readFile(filename, [options], callback)
readFileSync(filename, [options])
writeFile(filename, obj, [options], callback)
writeFileSync(filename, obj, [options])

options (object, default undefined): Pass in any fs.readFile options or set reviver for a JSON reviver.
- throws (boolean, default: true). If JSON.parse throws an error, pass this error to the callback. If false, returns null for the object.
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
jsonfile.readFile(file, function (err, obj) {
if (err) console.error(err)
console.dir(obj)
})

You can also use this method with promises. The readFile method will return a promise if you do not pass a callback function.
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
jsonfile.readFile(file)
.then(obj => console.dir(obj))
.catch(error => console.error(error))

options (object, default undefined): Pass in any fs.readFileSync options or set reviver for a JSON reviver.
- throws (boolean, default: true). If an error is encountered reading or parsing the file, throw the error. If false, returns null for the object.
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
console.dir(jsonfile.readFileSync(file))

options: Pass in any fs.writeFile options or set replacer for a JSON replacer. Can also pass in spaces, or override EOL string or set finalEOL flag as false to not save the file with EOL at the end.
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj, function (err) {
if (err) console.error(err)
})

Or use with promises as follows:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj)
.then(res => {
console.log('Write complete')
})
.catch(error => console.error(error))

formatting with spaces:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj, { spaces: 2 }, function (err) {
if (err) console.error(err)
})

overriding EOL:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj, { spaces: 2, EOL: '\r\n' }, function (err) {
if (err) console.error(err)
})

disabling the EOL at the end of file:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj, { spaces: 2, finalEOL: false }, function (err) {
if (err) console.log(err)
})

appending to an existing JSON file:
You can use fs.writeFile option { flag: 'a' } to achieve this.
const jsonfile = require('jsonfile')
const file = '/tmp/mayAlreadyExistedData.json'
const obj = { name: 'JP' }
jsonfile.writeFile(file, obj, { flag: 'a' }, function (err) {
if (err) console.error(err)
})

options: Pass in any fs.writeFileSync options or set replacer for a JSON replacer. Can also pass in spaces, or override EOL string or set finalEOL flag as false to not save the file with EOL at the end.
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFileSync(file, obj)

formatting with spaces:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFileSync(file, obj, { spaces: 2 })

overriding EOL:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFileSync(file, obj, { spaces: 2, EOL: '\r\n' })

disabling the EOL at the end of file:
const jsonfile = require('jsonfile')
const file = '/tmp/data.json'
const obj = { name: 'JP' }
jsonfile.writeFileSync(file, obj, { spaces: 2, finalEOL: false })

appending to an existing JSON file:
You can use fs.writeFileSync option { flag: 'a' } to achieve this.
const jsonfile = require('jsonfile')
const file = '/tmp/mayAlreadyExistedData.json'
const obj = { name: 'JP' }
jsonfile.writeFileSync(file, obj, { flag: 'a' })

Create nested values and any intermediaries using dot notation ('a.b.c') paths.
Install with npm:
object {object}: The object to set value on
prop {string}: The property to set. Dot-notation may be used.
value {any}: The value to set on object[prop]

Updates and returns the given object:
Escaping with backslashes
Prevent set-value from splitting on a dot by prefixing it with backslashes:
console.log(set({}, 'a\\.b.c', 'd'));
//=> { 'a.b': { c: 'd' } }
console.log(set({}, 'a\\.b\\.c', 'd'));
//=> { 'a.b.c': 'd' }

Escaping with double-quotes or single-quotes
Wrap double or single quotes around the string, or part of the string, that should not be split by set-value:
console.log(set({}, '"a.b".c', 'd'));
//=> { 'a.b': { c: 'd' } }
console.log(set({}, "'a.b'.c", "d"));
//=> { 'a.b': { c: 'd' } }
console.log(set({}, '"this/is/a/.file.path"', 'd'));
//=> { 'this/is/a/.file.path': 'd' }

set-value does not split inside brackets or braces:
console.log(set({}, '[a.b].c', 'd'));
//=> { '[a.b]': { c: 'd' } }
console.log(set({}, "(a.b).c", "d"));
//=> { '(a.b)': { c: 'd' } }
console.log(set({}, "<a.b>.c", "d"));
//=> { '<a.b>': { c: 'd' } }
console.log(set({}, "{a..b}.c", "d"));
//=> { '{a..b}': { c: 'd' } }

If there are any regressions please create a bug report. Thanks!
Related project: get-value, which uses property paths (a.b.c) to get a nested value from an object.

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 59 | jonschlinkert |
| 1 | vadimdemedes |
| 1 | wtgtybhertgeghgtwtg |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on June 21, 2017.

aws4
A small utility to sign vanilla Node.js http(s) request options using Amazon’s AWS Signature Version 4.
If you want to sign and send AWS requests in a modern browser, or an environment like Cloudflare Workers, then check out aws4fetch – otherwise you can also bundle this library for use in older browsers.
The only AWS service that doesn’t support v4 as of 2020-05-22 is SimpleDB (it only supports AWS Signature Version 2).
It also provides defaults for a number of core AWS headers and request parameters, making it very easy to query AWS services, or build out a fully-featured AWS library.
var https = require('https')
var aws4 = require('aws4')
// to illustrate usage, we'll create a utility function to request and pipe to stdout
function request(opts) { https.request(opts, function(res) { res.pipe(process.stdout) }).end(opts.body || '') }
// aws4 will sign an options object as you'd pass to http.request, with an AWS service and region
var opts = { host: 'my-bucket.s3.us-west-1.amazonaws.com', path: '/my-object', service: 's3', region: 'us-west-1' }
// aws4.sign() will sign and modify these options, ready to pass to http.request
aws4.sign(opts, { accessKeyId: '', secretAccessKey: '' })
// or it can get credentials from process.env.AWS_ACCESS_KEY_ID, etc
aws4.sign(opts)
// for most AWS services, aws4 can figure out the service and region if you pass a host
opts = { host: 'my-bucket.s3.us-west-1.amazonaws.com', path: '/my-object' }
// usually it will add/modify request headers, but you can also sign the query:
opts = { host: 'my-bucket.s3.amazonaws.com', path: '/?X-Amz-Expires=12345', signQuery: true }
// and for services with simple hosts, aws4 can infer the host from service and region:
opts = { service: 'sqs', region: 'us-east-1', path: '/?Action=ListQueues' }
// and if you're using us-east-1, it's the default:
opts = { service: 'sqs', path: '/?Action=ListQueues' }
aws4.sign(opts)
console.log(opts)
/*
{
host: 'sqs.us-east-1.amazonaws.com',
path: '/?Action=ListQueues',
headers: {
Host: 'sqs.us-east-1.amazonaws.com',
'X-Amz-Date': '20121226T061030Z',
Authorization: 'AWS4-HMAC-SHA256 Credential=ABCDEF/20121226/us-east-1/sqs/aws4_request, ...'
}
}
*/
// we can now use this to query AWS
request(opts)
/*
<?xml version="1.0"?>
<ListQueuesResponse xmlns="https://queue.amazonaws.com/doc/2012-11-05/">
...
*/
// aws4 can infer the HTTP method if a body is passed in
// method will be POST and Content-Type: 'application/x-www-form-urlencoded; charset=utf-8'
request(aws4.sign({ service: 'iam', body: 'Action=ListGroups&Version=2010-05-08' }))
/*
<ListGroupsResponse xmlns="https://iam.amazonaws.com/doc/2010-05-08/">
...
*/
// you can specify any custom option or header as per usual
request(aws4.sign({
service: 'dynamodb',
region: 'ap-southeast-2',
method: 'POST',
path: '/',
headers: {
'Content-Type': 'application/x-amz-json-1.0',
'X-Amz-Target': 'DynamoDB_20120810.ListTables'
},
body: '{}'
}))
/*
{"TableNames":[]}
...
*/
// The raw RequestSigner can be used to generate CodeCommit Git passwords
var signer = new aws4.RequestSigner({
service: 'codecommit',
host: 'git-codecommit.us-east-1.amazonaws.com',
method: 'GIT',
path: '/v1/repos/MyAwesomeRepo',
})
var password = signer.getDateTime() + 'Z' + signer.signature()
// see example.js for examples with other services

Calculates and populates any necessary AWS headers and/or request options on requestOptions. Returns requestOptions as a convenience for chaining.
requestOptions is an object holding the same options that the Node.js http.request function takes.
The following properties of requestOptions are used in the signing or populated if they don’t already exist:
hostname or host (will try to be determined from service and region if not given)
method (will use 'GET' if not given, or 'POST' if there is a body)
path (will use '/' if not given)
body (will use '' if not given)
service (will try to be calculated from hostname or host if not given)
region (will try to be calculated from hostname or host, or use 'us-east-1' if not given)
signQuery (to sign the query instead of adding an Authorization header, defaults to false)
headers['Host'] (will use hostname or host or be calculated if not given)
headers['Content-Type'] (will use 'application/x-www-form-urlencoded; charset=utf-8' if not given and there is a body)
headers['Date'] (used to calculate the signature date if given, otherwise new Date is used)

Your AWS credentials (which can be found in your AWS console) can be specified in one of two ways:
aws4.sign(requestOptions, {
secretAccessKey: "<your-secret-access-key>",
accessKeyId: "<your-access-key-id>",
sessionToken: "<your-session-token>"
})

Or via environment variables in process.env, such as this:

export AWS_ACCESS_KEY_ID="<your-access-key-id>"
export AWS_SECRET_ACCESS_KEY="<your-secret-access-key>"
export AWS_SESSION_TOKEN="<your-session-token>"
(will also use AWS_ACCESS_KEY and AWS_SECRET_KEY if available)
The sessionToken property and AWS_SESSION_TOKEN environment variable are optional for signing with IAM STS temporary credentials.
With npm do:
npm install aws4
Can also be used in the browser.
Thanks to [@jed](https://github.com/jed) for his dynamo-client lib where I first committed and subsequently extracted this code.
Also thanks to the official Node.js AWS SDK for giving me a start on implementing the v4 signature.
Clone new AST without extra properties
Returns new clone of originalAst but without extra properties.
Leaves only properties defined in The ESTree Spec (formerly known as Mozilla SpiderMonkey Parser API). Also note that extra information (such as loc, range and raw) is eliminated too.
Returns customized function for cloning AST, with user-provided whiteList.
Returns new clone of originalAst by customized function.
| type | default value |
|---|---|
| object | N/A |
whiteList is an object containing NodeType as keys and properties as values.
{
ArrayExpression: ['type', 'elements'],
ArrayPattern: ['type', 'elements'],
ArrowFunctionExpression: ['type', 'id', 'params', 'body', 'generator', 'expression'],
AssignmentExpression: ['type', 'operator', 'left', 'right'],
...Returns customized function for cloning AST, configured by custom options.
Returns new clone of originalAst by customized function.
| type | default value |
|---|---|
| object | {} |
Configuration options. If not passed, default options will be used.
| type | default value |
|---|---|
| array of string | null |
List of extra properties to be left in the result AST. For example, functions returned by espurify.customize({extra: ['raw']}) will preserve raw properties of Literal. Functions returned by espurify.customize({extra: ['loc', 'range']}) will preserve loc and range properties of each Node.
var espurify = require('espurify'),
estraverse = require('estraverse'),
esprima = require('esprima'),
syntax = estraverse.Syntax,
assert = require('assert');
var jsCode = 'assert("foo")';
// Adding extra information to AST
var originalAst = esprima.parse(jsCode, {tolerant: true, loc: true, raw: true});
estraverse.replace(originalAst, {
leave: function (currentNode, parentNode) {
if (currentNode.type === syntax.Literal && typeof currentNode.raw !== 'undefined') {
currentNode['x-verbatim-bar'] = {
content : currentNode.raw,
precedence : 18 // escodegen.Precedence.Primary
};
return currentNode;
} else {
return undefined;
}
}
});
// purify AST
var purifiedClone = espurify(originalAst);
// original AST is not modified
assert.deepEqual(originalAst, {
type: 'Program',
body: [
{
type: 'ExpressionStatement',
expression: {
type: 'CallExpression',
callee: {
type: 'Identifier',
name: 'assert',
loc: {
start: {
line: 1,
column: 0
},
end: {
line: 1,
column: 6
}
}
},
arguments: [
{
type: 'Literal',
value: 'foo',
raw: '"foo"',
loc: {
start: {
line: 1,
column: 7
},
end: {
line: 1,
column: 12
}
},
"x-verbatim-bar": {
content: '"foo"',
precedence: 18
}
}
],
loc: {
start: {
line: 1,
column: 0
},
end: {
line: 1,
column: 13
}
}
},
loc: {
start: {
line: 1,
column: 0
},
end: {
line: 1,
column: 13
}
}
}
],
loc: {
start: {
line: 1,
column: 0
},
end: {
line: 1,
column: 13
}
},
errors: []
});
// Extra properties are eliminated from cloned AST
assert.deepEqual(purifiedClone, {
type: 'Program',
body: [
{
type: 'ExpressionStatement',
expression: {
type: 'CallExpression',
callee: {
type: 'Identifier',
name: 'assert'
},
arguments: [
{
type: 'Literal',
value: 'foo'
}
]
}
}
]
});

Install
npm install --save espurify
Use
Returns true if the value is a finite number.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
In JavaScript, it’s not always as straightforward as it should be to reliably check if a value is a number. It’s common for devs to use +, -, or Number() to cast a string value to a number (for example, when values are returned from user input, regex matches, parsers, etc). But there are many non-intuitive edge cases that yield unexpected results:
console.log(+[]); //=> 0
console.log(+''); //=> 0
console.log(+' '); //=> 0
console.log(typeof NaN); //=> 'number'

This library offers a performant way to smooth out edge cases like these.
See the tests for more examples.
isNumber(5e3); // true
isNumber(0xff); // true
isNumber(-1.1); // true
isNumber(0); // true
isNumber(1); // true
isNumber(1.1); // true
isNumber(10); // true
isNumber(10.10); // true
isNumber(100); // true
isNumber('-1.1'); // true
isNumber('0'); // true
isNumber('012'); // true
isNumber('0xff'); // true
isNumber('1'); // true
isNumber('1.1'); // true
isNumber('10'); // true
isNumber('10.10'); // true
isNumber('100'); // true
isNumber('5e3'); // true
isNumber(parseInt('012')); // true
isNumber(parseFloat('012')); // true

Everything else is false, as you would expect:
isNumber(Infinity); // false
isNumber(NaN); // false
isNumber(null); // false
isNumber(undefined); // false
isNumber(''); // false
isNumber(' '); // false
isNumber('foo'); // false
isNumber([1]); // false
isNumber([]); // false
isNumber(function () {}); // false
isNumber({}); // false

Note: strings are validated as finite numbers, using Number.isFinite if it exists.

Breaking changes: as of v7.0, values that are instanceof Number and instanceof String are no longer supported.

As with all benchmarks, take these with a grain of salt. See the benchmarks for more detail.
# all
v7.0 x 413,222 ops/sec ±2.02% (86 runs sampled)
v6.0 x 111,061 ops/sec ±1.29% (85 runs sampled)
parseFloat x 317,596 ops/sec ±1.36% (86 runs sampled)
fastest is 'v7.0'
# string
v7.0 x 3,054,496 ops/sec ±1.05% (89 runs sampled)
v6.0 x 2,957,781 ops/sec ±0.98% (88 runs sampled)
parseFloat x 3,071,060 ops/sec ±1.13% (88 runs sampled)
fastest is 'parseFloat,v7.0'
# number
v7.0 x 3,146,895 ops/sec ±0.89% (89 runs sampled)
v6.0 x 3,214,038 ops/sec ±1.07% (89 runs sampled)
parseFloat x 3,077,588 ops/sec ±1.07% (87 runs sampled)
fastest is 'v6.0'
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
Related projects: is-plain-object (returns true if an object was created by the Object constructor) and is-primitive (returns true if the value is a primitive).

| Commits | Contributor |
|---|---|
| 49 | jonschlinkert |
| 5 | charlike-old |
| 1 | benaadams |
| 1 | realityking |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on June 15, 2018.

## Pure JS character encoding conversion
npm install iconv-lite
(For browser use, bundlers will also pull in two more modules: buffer and stream.)

var iconv = require('iconv-lite');
// Convert from an encoded buffer to js string.
str = iconv.decode(Buffer.from([0x68, 0x65, 0x6c, 0x6c, 0x6f]), 'win1251');
// Convert from js string to an encoded buffer.
buf = iconv.encode("Sample input string", 'win1251');
// Check if encoding is supported
iconv.encodingExists("us-ascii")
// Decode stream (from binary stream to js strings)
http.createServer(function(req, res) {
var converterStream = iconv.decodeStream('win1251');
req.pipe(converterStream);
converterStream.on('data', function(str) {
console.log(str); // Do something with decoded strings, chunk-by-chunk.
});
});
// Convert encoding streaming example
fs.createReadStream('file-in-win1251.txt')
.pipe(iconv.decodeStream('win1251'))
.pipe(iconv.encodeStream('ucs2'))
.pipe(fs.createWriteStream('file-in-ucs2.txt'));
// Sugar: all encode/decode streams have .collect(cb) method to accumulate data.
http.createServer(function(req, res) {
req.pipe(iconv.decodeStream('win1251')).collect(function(err, body) {
assert(typeof body == 'string');
console.log(body); // full request body string
});
});

NOTE: This doesn't work on latest Node versions. See details.
// After this call all Node basic primitives will understand iconv-lite encodings.
iconv.extendNodeEncodings();
// Examples:
buf = new Buffer(str, 'win1251');
buf.write(str, 'gbk');
str = buf.toString('latin1');
assert(Buffer.isEncoding('iso-8859-15'));
Buffer.byteLength(str, 'us-ascii');
http.createServer(function(req, res) {
req.setEncoding('big5');
req.collect(function(err, body) {
console.log(body);
});
});
fs.createReadStream("file.txt", "shift_jis");
// External modules are also supported (if they use Node primitives, which they probably do).
request = require('request');
request({
url: "http://github.com/",
encoding: "cp932"
});
// To remove extensions
iconv.undoExtendNodeEncodings();

Most singlebyte encodings are generated automatically from node-iconv. Thank you Ben Noordhuis and libiconv authors!
Multibyte encodings are generated from Unicode.org mappings and WHATWG Encoding Standard mappings. Thank you, respective authors!
Comparison with node-iconv module (1000x256kb, on MacBook Pro, Core i5/2.6 GHz, Node v0.12.0). Note: your results may vary, so please always check on your hardware.
| operation | iconv@2.1.4 | iconv-lite@0.4.7 |
|---|---|---|
| encode('win1251') | ~96 Mb/s | ~320 Mb/s |
| decode('win1251') | ~95 Mb/s | ~246 Mb/s |
BOM handling:
Decoding: the BOM is stripped by default. Pass stripBOM: false in options (f.ex. iconv.decode(buf, enc, {stripBOM: false})) to keep it. A callback might also be given as a stripBOM parameter - it'll be called if a BOM character was actually found.
Encoding: add a BOM with the addBOM: true option.
This library supports UTF-16LE, UTF-16BE and UTF-16 encodings. The first two are straightforward, but UTF-16 tries to be smart about endianness in the following ways:
Decoding: uses the BOM and a 'spaces heuristic' to determine input endianness. Default is UTF-16LE, but can be overridden with the defaultEncoding: 'utf-16be' option. Strips the BOM unless stripBOM: false.
Encoding: uses UTF-16LE and writes a BOM by default. Use addBOM: false to override.
When decoding, be sure to supply a Buffer to decode() method, otherwise bad things usually happen.
Untranslatable characters are set to � or ?. No transliteration is currently supported.
Node versions 0.10.31 and 0.11.13 are buggy, don’t use them (see #65, #77).
$ git clone git@github.com:ashtuchkin/iconv-lite.git
$ cd iconv-lite
$ npm install
$ npm test
$ # To view performance:
$ node test/performance.js
$ # To view test coverage:
$ npm run coverage
$ open coverage/lcov-report/index.htmlnpm install two more modules: buffer and stream).var iconv = require('iconv-lite');
// Convert from an encoded buffer to js string.
str = iconv.decode(Buffer.from([0x68, 0x65, 0x6c, 0x6c, 0x6f]), 'win1251');
// Convert from js string to an encoded buffer.
buf = iconv.encode("Sample input string", 'win1251');
// Check if encoding is supported
iconv.encodingExists("us-ascii")
// Decode stream (from binary stream to js strings)
http.createServer(function(req, res) {
var converterStream = iconv.decodeStream('win1251');
req.pipe(converterStream);
converterStream.on('data', function(str) {
console.log(str); // Do something with decoded strings, chunk-by-chunk.
});
});
// Convert encoding streaming example
fs.createReadStream('file-in-win1251.txt')
.pipe(iconv.decodeStream('win1251'))
.pipe(iconv.encodeStream('ucs2'))
.pipe(fs.createWriteStream('file-in-ucs2.txt'));
// Sugar: all encode/decode streams have .collect(cb) method to accumulate data.
http.createServer(function(req, res) {
req.pipe(iconv.decodeStream('win1251')).collect(function(err, body) {
assert(typeof body == 'string');
console.log(body); // full request body string
});
});
NOTE: This doesn’t work on latest Node versions. See details.
// After this call all Node basic primitives will understand iconv-lite encodings.
iconv.extendNodeEncodings();
// Examples:
buf = new Buffer(str, 'win1251');
buf.write(str, 'gbk');
str = buf.toString('latin1');
assert(Buffer.isEncoding('iso-8859-15'));
Buffer.byteLength(str, 'us-ascii');
http.createServer(function(req, res) {
req.setEncoding('big5');
req.collect(function(err, body) {
console.log(body);
});
});
fs.createReadStream("file.txt", "shift_jis");
// External modules are also supported (if they use Node primitives, which they probably do).
request = require('request');
request({
url: "http://github.com/",
encoding: "cp932"
});
// To remove extensions
iconv.undoExtendNodeEncodings();
Most single-byte encodings are generated automatically from node-iconv. Thank you Ben Noordhuis and libiconv authors!
Multibyte encodings are generated from Unicode.org mappings and WHATWG Encoding Standard mappings. Thank you, respective authors!
Comparison with node-iconv module (1000x256kb, on MacBook Pro, Core i5/2.6 GHz, Node v0.12.0). Note: your results may vary, so please always check on your hardware.
| operation | iconv@2.1.4 | iconv-lite@0.4.7 |
|---|---|---|
| encode('win1251') | ~96 Mb/s | ~320 Mb/s |
| decode('win1251') | ~95 Mb/s | ~246 Mb/s |
BOM handling:
* Decoding: BOM is stripped by default, unless overridden by passing stripBOM: false in options (f.ex. iconv.decode(buf, enc, {stripBOM: false})). A callback might also be given as a stripBOM parameter - it’ll be called if the BOM character was actually found.
* Encoding: no BOM is added, unless overridden with the addBOM: true option.
UTF-16 support: this library supports UTF-16LE, UTF-16BE and UTF-16 encodings. The first two are straightforward, but UTF-16 tries to be smart about endianness in the following ways:
* Decoding: uses BOM and a ‘spaces heuristic’ to determine input endianness. Default is UTF-16LE, but it can be overridden with the defaultEncoding: 'utf-16be' option. Strips BOM unless stripBOM: false.
* Encoding: uses UTF-16LE and writes a BOM by default. Use addBOM: false to override.
When decoding, be sure to supply a Buffer to decode() method, otherwise bad things usually happen.
Untranslatable characters are set to � or ?. No transliteration is currently supported.
Node versions 0.10.31 and 0.11.13 are buggy, don’t use them (see #65, #77).
$ git clone git@github.com:ashtuchkin/iconv-lite.git
$ cd iconv-lite
$ npm install
$ npm test
$ # To view performance:
$ node test/performance.js
$ # To view test coverage:
$ npm run coverage
$ open coverage/lcov-report/index.html
Trie implementation in JavaScript. Each Trie node holds one character of a word.
| Trie |
|---|
insert a string word into the trie.
| params | |
|---|---|
| name | type |
| word | string |
| return |
|---|
| TrieNode |
| runtime |
|---|
| O(k) : k = length of the word |
// assuming: const englishLang = new Trie();
englishLang.insert('hi');
englishLang.insert('hit');
englishLang.insert('hide');
englishLang.insert('hello');
englishLang.insert('sand');
englishLang.insert('safe');
englishLang.insert('noun');
englishLang.insert('name');
Note: the empty string is not a default word in the trie. You can add the empty word explicitly using .insert('')
checks if a word exists in the trie.
| params | |
|---|---|
| name | type |
| word | string |
| return |
|---|
| boolean |
| runtime |
|---|
| O(k) : k = length of the word |
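The O(k) existence check described above can be sketched with a plain trie of Map nodes (an illustration, not this library's implementation; the function names here are made up):

```javascript
// Minimal trie: each node holds one character step, as described above.
function makeNode() {
  return { children: new Map(), isEndOfWord: false };
}

function insert(root, word) {
  let node = root;
  for (const ch of word) {
    if (!node.children.has(ch)) node.children.set(ch, makeNode());
    node = node.children.get(ch);
  }
  node.isEndOfWord = true; // mark the last character's node
}

// O(k): one map lookup per character of the word.
function has(root, word) {
  let node = root;
  for (const ch of word) {
    node = node.children.get(ch);
    if (!node) return false;
  }
  return node.isEndOfWord;
}

const root = makeNode();
insert(root, 'hi');
insert(root, 'hit');
console.log(has(root, 'hi')); // true
console.log(has(root, 'h'));  // false - 'h' is only a prefix, not an inserted word
```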
finds a word in the trie and returns the node of its last character.
| params | |
|---|---|
| name | type |
| word | string |
| return |
|---|
| TrieNode |
| runtime |
|---|
| O(k) : k = length of the word |
const hi = englishLang.find('hi');
// hi.getChar() = 'i'
// hi.getParent().getChar() = 'h'
const safe = englishLang.find('safe');
// safe.getChar() = 'e'
// safe.getParent().getChar() = 'f'
// safe.getParent().getParent().getChar() = 'a'
removes a word from the trie.
| params | |
|---|---|
| name | type |
| word | string |
| return |
|---|
| boolean |
| runtime |
|---|
| O(k) : k = length of the word |
englishLang.remove('hi'); // true - hi removed
englishLang.remove('sky'); // false - nothing is removed
traverses all words in the trie.
| params | ||
|---|---|---|
| name | type | description |
| cb | function | called with each word in the trie |
| runtime |
|---|
| O(n) : n = number of nodes in the trie |
converts the trie into an array of words.
| return | description |
|---|---|
| array | a list of all the words in the trie |
| runtime |
|---|
| O(n) : n = number of nodes in the trie |
gets the count of words in the trie.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
gets the count of nodes in the trie.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
clears the trie.
| runtime |
|---|
| O(1) |
englishLang.clear();
console.log(englishLang.wordsCount()); // 0
console.log(englishLang.nodesCount()); // 1
returns the node’s char.
| return |
|---|
| string |
returns the parent node.
| return |
|---|
| TrieNode |
checks if a node is the end of a word.
| return |
|---|
| boolean |
returns the child node of a char.
| return |
|---|
| TrieNode |
checks if the node has a child for a given char.
| return |
|---|
| boolean |
returns the number of children nodes.
| return |
|---|
| number |
grunt build
Doctrine is a JSDoc parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file).
You can install Doctrine using npm:
npm install doctrine --save-dev
Doctrine can also be used in web browsers using Browserify.
Require doctrine inside of your JavaScript:
The primary method is parse(), which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are:
unwrap - set to true to delete the leading /**, any * that begins a line, and the trailing */ from the source text. Default: false.tags - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if tags is ["param"], then only @param tags will be returned. Default: null.recoverable - set to true to keep parsing even when syntax errors occur. Default: false.sloppy - set to true to allow optional parameters to be specified in brackets (@param {string} [foo]). Default: false.lineNumbers - set to true to add lineNumber to each node, specifying the line on which the node is found in the source. Default: false.range - set to true to add range to each node, specifying the start and end index of the node in the original comment. Default: false.Here’s a simple example:
var ast = doctrine.parse(
[
"/**",
" * This function comment is parsed by doctrine",
" * @param {{ok:String}} userName",
"*/"
].join('\n'), { unwrap: true });
This example returns the following AST:
{
  "description": "This function comment is parsed by doctrine",
  "tags": [
    {
      "title": "param",
      "description": null,
      "type": {
        "type": "RecordType",
        "fields": [
          {
            "type": "FieldType",
            "key": "ok",
            "value": {
              "type": "NameExpression",
              "name": "String"
            }
          }
        ]
      },
      "name": "userName"
    }
  ]
}
See the demo page for more detail.
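As a rough illustration of what the unwrap: true option does to the comment text before parsing (an approximation, not doctrine's actual implementation):

```javascript
// Strip the leading /**, any * that begins a line, and the trailing */.
function unwrapComment(src) {
  return src
    .replace(/^\/\*\*/, '')            // leading /**
    .replace(/\*\/\s*$/, '')           // trailing */
    .replace(/^[ \t]*\*[ \t]?/gm, '')  // a * that begins a line
    .trim();
}

console.log(unwrapComment('/**\n * @param {string} name\n */'));
// => '@param {string} name'
```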
These folks keep the project moving and are resources for help:
Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you’re not sure where to dig in, check out the issues.
No. Doctrine can only parse JSDoc comments, so you’ll need to pass just the JSDoc comment to Doctrine in order for it to work.
Some functions are derived from esprima.
Some extensions are derived from closure-compiler.
Join our Chatroom
Generate a regex from a string or array of strings.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
(TOC generated by verb using markdown-toc)
Install with npm:
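The install command itself was elided here; presumably the standard one for this package:

```sh
$ npm install --save to-regex
```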
var toRegex = require('to-regex');
console.log(toRegex('foo'));
//=> /^(?:foo)$/
console.log(toRegex('foo', {negate: true}));
//=> /^(?:(?:(?!^(?:foo)$).)*)$/
console.log(toRegex('foo', {contains: true}));
//=> /(?:foo)/
console.log(toRegex(['foo', 'bar'], {negate: true}));
//=> /^(?:(?:(?!^(?:(?:foo)|(?:bar))$).)*)$/
console.log(toRegex(['foo', 'bar'], {negate: true, contains: true}));
//=> /^(?:(?:(?!(?:(?:foo)|(?:bar))).)*)$/
Type: Boolean
Default: undefined
Generate a regex that will match any string that contains the given pattern. By default, the generated regex is strict and will only return true for exact matches.
Type: Boolean
Default: undefined
Create a regex that will match everything except the given pattern.
var toRegex = require('to-regex');
console.log(toRegex('foo', {negate: true}));
//=> /^(?:(?:(?!^(?:foo)$).)*)$/
Type: Boolean
Default: undefined
Adds the i flag, to enable case-insensitive matching.
Alternatively you can pass the flags you want directly on options.flags.
Type: String
Default: undefined
Define the flags you want to use on the generated regex.
var toRegex = require('to-regex');
console.log(toRegex('foo', {flags: 'gm'}));
//=> /^(?:foo)$/gm
console.log(toRegex('foo', {flags: 'gmi', nocase: true})); //<= handles redundancy
//=> /^(?:foo)$/gmi
Type: Boolean
Default: true
Generated regex is cached based on the provided string and options. As a result, runtime compilation only happens once per pattern (as long as options are also the same), which can result in dramatic speed improvements.
This also helps with debugging, since the options and pattern are added to the generated regex.
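The caching behavior can be illustrated with a simplified sketch (not to-regex's actual implementation; makeRegex is a made-up name):

```javascript
const cache = {};

// The cache key combines the pattern and the options, so the same inputs
// reuse the previously compiled regex object.
function makeRegex(pattern, options = {}) {
  const key = pattern + JSON.stringify(options);
  if (!cache[key]) {
    cache[key] = new RegExp(`^(?:${pattern})$`, options.flags);
  }
  return cache[key];
}

console.log(makeRegex('foo') === makeRegex('foo')); // true - same cached instance
console.log(makeRegex('foo') === makeRegex('foo', { flags: 'i' })); // false - different options
```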
Disable caching
Type: Boolean
Default: undefined
Check the generated regular expression with safe-regex and throw an error if the regex is potentially unsafe.
Examples
console.log(toRegex('(x+x+)+y'));
//=> /^(?:(x+x+)+y)$/
// The following would throw an error
toRegex('(x+x+)+y', {safe: true});
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
* has-glob: true if an array has a glob pattern. | homepage
* is-glob: true if the given string looks like a glob pattern or an extglob pattern… more | homepage
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on February 24, 2018.
# psl (Public Suffix List)
psl is a JavaScript domain name parser based on the Public Suffix List.
This implementation is tested against the test data hosted by Mozilla and kindly provided by Comodo.
Cross browser testing provided by
The Public Suffix List is a cross-vendor initiative to provide an accurate list of domain name suffixes.
A “public suffix” is one under which Internet users can directly register names. Some examples of public suffixes are “.com”, “.co.uk” and “pvt.k12.wy.us”. The Public Suffix List is a list of all known public suffixes.
Source: http://publicsuffix.org
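The lookup idea can be sketched with a toy suffix set (the real list has thousands of rules plus wildcards and exceptions; this is only an illustration, not psl's implementation):

```javascript
// Tiny stand-in for the Public Suffix List.
const suffixes = new Set(['com', 'co.uk', 'pvt.k12.wy.us']);

// Find the longest matching public suffix by trying ever-shorter label tails.
function publicSuffix(domain) {
  const labels = domain.split('.');
  for (let i = 0; i < labels.length; i++) {
    const candidate = labels.slice(i).join('.');
    if (suffixes.has(candidate)) return candidate;
  }
  return null;
}

console.log(publicSuffix('www.example.co.uk')); // 'co.uk'
console.log(publicSuffix('google.com'));        // 'com'
```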
Download psl.min.js and include it in a script tag.
This script is browserified and wrapped in a umd wrapper so you should be able to use it standalone or together with a module loader.
psl.parse(domain)
Parse domain based on Public Suffix List. Returns an Object with the following properties:
* tld: Top level domain (this is the public suffix).
* sld: Second level domain (the first private part of the domain name).
* domain: The domain name is the sld + tld.
* subdomain: Optional parts left of the domain.
var psl = require('psl');
// Parse domain without subdomain
var parsed = psl.parse('google.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'google'
console.log(parsed.domain); // 'google.com'
console.log(parsed.subdomain); // null
// Parse domain with subdomain
var parsed = psl.parse('www.google.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'google'
console.log(parsed.domain); // 'google.com'
console.log(parsed.subdomain); // 'www'
// Parse domain with nested subdomains
var parsed = psl.parse('a.b.c.d.foo.com');
console.log(parsed.tld); // 'com'
console.log(parsed.sld); // 'foo'
console.log(parsed.domain); // 'foo.com'
console.log(parsed.subdomain); // 'a.b.c.d'
psl.get(domain)
Get the domain name, sld + tld. Returns null if not valid.
var psl = require('psl');
// null input.
psl.get(null); // null
// Mixed case.
psl.get('COM'); // null
psl.get('example.COM'); // 'example.com'
psl.get('WwW.example.COM'); // 'example.com'
// Unlisted TLD.
psl.get('example'); // null
psl.get('example.example'); // 'example.example'
psl.get('b.example.example'); // 'example.example'
psl.get('a.b.example.example'); // 'example.example'
// TLD with only 1 rule.
psl.get('biz'); // null
psl.get('domain.biz'); // 'domain.biz'
psl.get('b.domain.biz'); // 'domain.biz'
psl.get('a.b.domain.biz'); // 'domain.biz'
// TLD with some 2-level rules.
psl.get('uk.com'); // null
psl.get('example.uk.com'); // 'example.uk.com'
psl.get('b.example.uk.com'); // 'example.uk.com'
// More complex TLD.
psl.get('c.kobe.jp'); // null
psl.get('b.c.kobe.jp'); // 'b.c.kobe.jp'
psl.get('a.b.c.kobe.jp'); // 'b.c.kobe.jp'
psl.get('city.kobe.jp'); // 'city.kobe.jp'
psl.get('www.city.kobe.jp'); // 'city.kobe.jp'
// IDN labels.
psl.get('食狮.com.cn'); // '食狮.com.cn'
psl.get('食狮.公司.cn'); // '食狮.公司.cn'
psl.get('www.食狮.公司.cn'); // '食狮.公司.cn'
// Same as above, but punycoded.
psl.get('xn--85x722f.com.cn'); // 'xn--85x722f.com.cn'
psl.get('xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn'
psl.get('www.xn--85x722f.xn--55qx5d.cn'); // 'xn--85x722f.xn--55qx5d.cn'
psl.isValid(domain)
Check whether a domain has a valid Public Suffix. Returns a Boolean indicating whether the domain has a valid Public Suffix.
var psl = require('psl');
psl.isValid('google.com'); // true
psl.isValid('www.google.com'); // true
psl.isValid('x.yz'); // false
Tests are written using mocha and can be run in two different environments: node and phantomjs.
# This will run `eslint`, `mocha` and `karma`.
npm test
# Individual test environments
# Run tests in node only.
./node_modules/.bin/mocha test
# Run tests in phantomjs only.
./node_modules/.bin/karma start ./karma.conf.js --single-run
# Build data (parse raw list) and create dist files
npm run build
Feel free to fork if you see possible improvements!
esutils is a utility box for ECMAScript language tools.
Returns true if node is an Expression as defined in ECMA262 edition 5.1 section 11.
Returns true if node is a Statement as defined in ECMA262 edition 5.1 section 12.
Returns true if node is an IterationStatement as defined in ECMA262 edition 5.1 section 12.6.
Returns true if node is a SourceElement as defined in ECMA262 edition 5.1 section 14.
Returns Statement? if node has trailing Statement.
When taking this IfStatement, returns consequent; statement.
Returns true if node is a problematic IfStatement. If node is a problematic IfStatement, node cannot be represented as one-to-one JavaScript code.
{
type: 'IfStatement',
consequent: {
type: 'WithStatement',
body: {
type: 'IfStatement',
consequent: {type: 'EmptyStatement'}
}
},
alternate: {type: 'EmptyStatement'}
}
The above node cannot be represented as JavaScript code, since the top level else alternate belongs to an inner IfStatement.
Returns true if the provided code is a decimal digit.
Returns true if the provided code is a hexadecimal digit.
Returns true if the provided code is an octal digit.
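Rough stand-ins for the digit predicates above, operating on character codes (esutils' real implementations may differ; the exact signatures here are assumptions):

```javascript
function isDecimalDigit(code) {
  return code >= 0x30 && code <= 0x39; // 0..9
}

function isHexDigit(code) {
  return isDecimalDigit(code) ||
    (code >= 0x41 && code <= 0x46) || // A..F
    (code >= 0x61 && code <= 0x66);   // a..f
}

function isOctalDigit(code) {
  return code >= 0x30 && code <= 0x37; // 0..7
}

console.log(isDecimalDigit('9'.charCodeAt(0))); // true
console.log(isHexDigit('f'.charCodeAt(0)));     // true
console.log(isOctalDigit('8'.charCodeAt(0)));   // false
```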
Returns true if the provided code is white space. White space characters are formally defined in ECMA262.
Returns true if the provided code is a line terminator. Line terminator characters are formally defined in ECMA262.
Returns true if the provided code can be the first character of an ECMA262 Identifier. They are formally defined in ECMA262.
Returns true if the provided code can be a trailing character of an ECMA262 Identifier. They are formally defined in ECMA262.
Returns true if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 sections 7.6.1.1 and 7.6.1.2, respectively. If the strict flag is truthy, this function additionally checks whether id is a Keyword or Future Reserved Word under strict mode.
Returns true if provided identifier string is a Keyword or Future Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 sections 11.6.2.1 and 11.6.2.2, respectively. If the strict flag is truthy, this function additionally checks whether id is a Keyword or Future Reserved Word under strict mode.
Returns true if provided identifier string is a Reserved Word in ECMA262 edition 5.1. They are formally defined in ECMA262 section 7.6.1. If the strict flag is truthy, this function additionally checks whether id is a Reserved Word under strict mode.
Returns true if provided identifier string is a Reserved Word in ECMA262 edition 6. They are formally defined in ECMA262 section 11.6.2. If the strict flag is truthy, this function additionally checks whether id is a Reserved Word under strict mode.
Returns true if provided identifier string is one of eval or arguments. They are restricted in strict mode code throughout ECMA262 edition 5.1 and in ECMA262 edition 6 section 12.1.1.
Returns true if the provided identifier string is an IdentifierName as specified in ECMA262 edition 5.1 section 7.6.
Returns true if the provided identifier string is an IdentifierName as specified in ECMA262 edition 6 section 11.6.
Returns true if the provided identifier string is an Identifier as specified in ECMA262 edition 5.1 section 7.6. If the strict flag is truthy, this function additionally checks whether id is an Identifier under strict mode.
Returns true if the provided identifier string is an Identifier as specified in ECMA262 edition 6 section 12.1. If the strict flag is truthy, this function additionally checks whether id is an Identifier under strict mode.
Recursive version of fs.readdir. Exposes a stream API and a promise API.
const readdirp = require('readdirp');
// Use streams to achieve small RAM & CPU footprint.
// 1) Streams example with for-await.
for await (const entry of readdirp('.')) {
const {path} = entry;
console.log(`${JSON.stringify({path})}`);
}
// 2) Streams example, non for-await.
// Print out all JS files along with their size within the current folder & subfolders.
readdirp('.', {fileFilter: '*.js', alwaysStat: true})
.on('data', (entry) => {
const {path, stats: {size}} = entry;
console.log(`${JSON.stringify({path, size})}`);
})
// Optionally call stream.destroy() in `warn()` in order to abort and cause 'close' to be emitted
.on('warn', error => console.error('non-fatal error', error))
.on('error', error => console.error('fatal error', error))
.on('end', () => console.log('done'));
// 3) Promise example. More RAM and CPU than streams / for-await.
const files = await readdirp.promise('.');
console.log(files.map(file => file.path));
// Other options.
readdirp('test', {
fileFilter: '*.js',
directoryFilter: ['!.git', '!*modules'],
// directoryFilter: (di) => di.basename.length === 9
type: 'files_directories',
depth: 1
});For more examples, check out examples directory.
const stream = readdirp(root[, options]) — Stream API
* Stream of entry infos. Can be used with for await (const entry of stream) in Node.js 10+ (asyncIterator).
* on('data', (entry) => {}): entry info for every file / dir.
* on('warn', (error) => {}): non-fatal Error that prevents a file / dir from being processed. Example: inaccessible to the user.
* on('end'): we are done. Called when all entries were found and no more will be emitted.
* on('close'): stream is destroyed via stream.destroy(). Could be useful if you want to manually abort even on a non-fatal error. At that point the stream is no longer readable and no more entries, warnings or errors are emitted.
const entries = await readdirp.promise(root[, options]) — Promise API. Returns a list of entry infos.
The first argument is always root, the path in which to start reading and recursing into subdirectories.
fileFilter: ["*.js"]: filter to include or exclude files. A Function, Glob string or Array of glob strings.
* Glob strings (e.g. *.js) are matched using picomatch, so go there for more information. Globstars (**) are not supported since specifying a recursive pattern for an already recursive function doesn’t make sense. Negated globs (as explained in the minimatch documentation) are allowed, e.g., !*.txt matches everything but text files. ['*.json', '*.js'] includes all JavaScript and JSON files. ['!.git', '!node_modules'] includes all directories except ‘.git’ and ‘node_modules’.
* directoryFilter: ['!.git']: filter to include/exclude directories found and to recurse into. Directories that do not pass a filter will not be recursed into.
* depth: 5: depth at which to stop recursing even if more subdirectories are found.
* type: 'files': determines if data events on the stream should be emitted for 'files' (default), 'directories', 'files_directories', or 'all'. Setting to 'all' will also include entries for other types of file descriptors like character devices, unix sockets and named pipes.
* alwaysStat: false: always return the stats property for every file. Default is false; readdirp will return Dirent entries. Setting it to true can double readdir execution time - use it only when you need file size, mtime etc. Cannot be enabled on node <10.10.0.
* lstat: false: include symlink entries in the stream along with files. When true, fs.lstat would be used instead of fs.stat.
EntryInfo has the following properties:
* path: 'assets/javascripts/react.js': path to the file/directory (relative to the given root)
* fullPath: '/Users/dev/projects/app/assets/javascripts/react.js': full path to the file/directory found
* basename: 'react.js': name of the file/directory
* dirent: fs.Dirent: built-in dir entry object - only with alwaysStat: false
* stats: fs.Stats: built-in stat object - only with alwaysStat: true
Changelog highlights:
* Added the highWaterMark option. Fixes race conditions related to for-await looping.
* Added bigint support to stat output on Windows. This is backwards-incompatible for some cases. Be careful. If you use it incorrectly, you’ll see “TypeError: Cannot mix BigInt and other types, use explicit conversions”.
* Renamed readdirp(options) to readdirp(root, options)
* Renamed the entryType option to type
* Renamed entryType: 'both' to 'files_directories'
* EntryInfo: renamed stat to stats (emitted only with alwaysStat: true; dirent is emitted instead of stats by default), renamed name to basename, and removed the parentDir and fullParentDir properties.
Runs Prettier as an ESLint rule and reports differences as individual ESLint issues.
If your desired formatting does not match Prettier’s output, you should use a different tool such as prettier-eslint instead.
error: Insert `,` (prettier/prettier) at pkg/commons-atom/ActiveEditorRegistry.js:22:25:
20 | import {
21 | observeActiveEditorsDebounced,
> 22 | editorChangesDebounced
| ^
23 | } from './debounced';;
24 |
25 | import {observableFromSubscribeFunction} from '../commons-node/event';
error: Delete `;` (prettier/prettier) at pkg/commons-atom/ActiveEditorRegistry.js:23:21:
21 | observeActiveEditorsDebounced,
22 | editorChangesDebounced
> 23 | } from './debounced';;
| ^
24 |
25 | import {observableFromSubscribeFunction} from '../commons-node/event';
26 | import {cacheWhileSubscribed} from '../commons-node/observable';
2 errors found.
./node_modules/.bin/eslint --format codeframe pkg/commons-atom/ActiveEditorRegistry.js
(code from nuclide)
eslint-plugin-prettier does not install Prettier or ESLint for you. You must install these yourself.
Then, in your .eslintrc.json:
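A minimal sketch of that configuration, consistent with the plugin and rule names used in the plugin:prettier/recommended expansion shown later in this README:

```json
{
  "plugins": ["prettier"],
  "rules": {
    "prettier/prettier": "error"
  }
}
```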
This plugin works best if you disable all other ESLint rules relating to code formatting, and only enable rules that detect potential bugs. (If another active ESLint rule disagrees with prettier about how code should be formatted, it will be impossible to avoid lint errors.) You can use eslint-config-prettier to disable all formatting-related ESLint rules.
This plugin ships with a plugin:prettier/recommended config that sets up both the plugin and eslint-config-prettier in one go.
In addition to the above installation instructions, install eslint-config-prettier:
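The command itself was elided here; presumably, installed as a dev dependency:

```sh
npm install --save-dev eslint-config-prettier
```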
Then you need to add plugin:prettier/recommended as the last extension in your .eslintrc.json:
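A minimal sketch of such an .eslintrc.json (your config may list other extends before it):

```json
{
  "extends": ["plugin:prettier/recommended"]
}
```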
You can then set Prettier’s own options inside a .prettierrc file.
Some ESLint plugins (such as eslint-plugin-react) also contain rules that conflict with Prettier. Add extra exclusions for the plugins you use like so:
For the list of every available exclusion rule set, please see the readme of eslint-config-prettier.
Exactly what does plugin:prettier/recommended do? Well, this is what it expands to:
{
"extends": ["prettier"],
"plugins": ["prettier"],
"rules": {
"prettier/prettier": "error",
"arrow-body-style": "off",
"prefer-arrow-callback": "off"
}
}
* "extends": ["prettier"] enables the main config from eslint-config-prettier, which turns off some ESLint core rules that conflict with Prettier.
* "plugins": ["prettier"] registers this plugin.
* "prettier/prettier": "error" turns on the rule provided by this plugin, which runs Prettier from within ESLint.
* "arrow-body-style": "off" and "prefer-arrow-callback": "off" turn off two ESLint core rules that unfortunately are problematic with this plugin - see the next section.
The arrow-body-style and prefer-arrow-callback issue: if you use arrow-body-style or prefer-arrow-callback together with the prettier/prettier rule from this plugin, you can in some cases end up with invalid code due to a bug in ESLint’s autofix - see issue #65.
For this reason, it’s recommended to turn off these rules. The plugin:prettier/recommended config does that for you.
You can still use these rules together with this plugin if you want, because the bug does not occur all the time. But if you do, you need to keep in mind that you might end up with invalid code, where you manually have to insert a missing closing parenthesis to get going again.
If you’re fixing large amounts of previously unformatted code, consider temporarily disabling the prettier/prettier rule and running eslint --fix and prettier --write separately.
Note: While it is possible to pass options to Prettier via your ESLint configuration file, it is not recommended because editor extensions such as prettier-atom and prettier-vscode will read .prettierrc, but won’t read settings from ESLint, which can lead to an inconsistent experience.
The first option:
An object representing options that will be passed into prettier. Example:
NB: This option will merge and override any config set with .prettierrc files
The second option:
An object with the following options
usePrettierrc: Enables loading of the Prettier configuration file, (default: true). May be useful if you are using multiple tools that conflict with each other, or do not wish to mix your ESLint settings with your Prettier configuration.
fileInfoOptions: Options that are passed to prettier.getFileInfo to decide whether a file needs to be formatted. Can be used for example to opt-out from ignoring files located in node_modules directories.
The rule is autofixable – if you run eslint with the --fix flag, your code will be formatted according to prettier style.
See CONTRIBUTING.md
This module is an implementation of Node’s native http module for the browser. It tries to match Node’s API and behavior as closely as possible, but some features aren’t available, since browsers don’t give nearly as much control over requests.
This is heavily inspired by, and intended to replace, http-browserify.
In accordance with its name, stream-http tries to provide data to its caller before the request has completed whenever possible.
Backpressure, allowing the browser to only pull data from the server as fast as it is consumed, is supported in: * Chrome >= 58 (using fetch and WritableStream)
The following browsers support true streaming, where only a small amount of the request has to be held in memory at once: * Chrome >= 43 (using the fetch API) * Firefox >= 9 (using moz-chunked-arraybuffer responseType with xhr)
The following browsers support pseudo-streaming, where the data is available before the request finishes, but the entire response must be held in memory: * Chrome * Safari >= 5, and maybe older * IE >= 10 * Most other Webkit-based browsers, including the default Android browser
All browsers newer than IE8 support binary responses. All of the above browsers that support true streaming or pseudo-streaming support it for binary data as well, except for IE10. Old (Presto-based) Opera does not support binary streaming either.
As of version 2.0.0, IE8 support requires the user to supply polyfills for Object.keys, Array.prototype.forEach, and Array.prototype.indexOf. Example implementations are provided in ie8-polyfill.js; alternately, you may want to consider using es5-shim. All browsers with full ES5 support shouldn’t require any polyfills.
The intent is to have the same API as the client part of the Node HTTP module. The interfaces are the same wherever practical, although limitations in browsers make an exact clone of the Node API impossible.
This module implements http.request, http.get, and most of http.ClientRequest and http.IncomingMessage in addition to http.METHODS and http.STATUS_CODES. See the Node docs for how these work.
The message.url property provides access to the final URL after all redirects. This is useful since the browser follows all redirects silently, unlike Node. It is available in Chrome 37 and newer, Firefox 32 and newer, and Safari 9 and newer.
The options.withCredentials boolean flag indicates whether the browser should send cookies or authentication information with a CORS request. Default false.
This module has to make some tradeoffs to support binary data and/or streaming. Generally, the module can make a fairly good decision about which underlying browser features to use, but sometimes it helps to get a little input from the developer.
options.mode field passed into http.request or http.get can take on one of the following values:
* 'default' (or any falsy value, including undefined): Try to provide partial data before the request completes, but not at the cost of correctness for binary data or correctness of the ‘content-type’ response header. This mode will also avoid slower code paths whenever possible, which is particularly useful when making large requests in a browser like Safari that has a weaker JavaScript engine.
options.requestTimeout allows setting a timeout in milliseconds for XHR and fetch (if supported by the browser). This is a limit on how long the entire process takes from beginning to end. Note that this is not the same as the node setTimeout functions, which apply to pauses in data transfer over the underlying socket, or the node timeout option, which applies to opening the connection.
The following are not supported:
* http.Agent is only a stub.
* Socket-level APIs on http.ClientRequest, like request.setTimeout, that operate directly on the underlying socket.
* message.httpVersion
* message.rawHeaders is modified by the browser, and may not quite match what is sent by the server.
* message.trailers and message.rawTrailers will remain empty.
* The timeout event/option and setTimeout functions, which operate on the underlying socket, are not available. However, see options.requestTimeout above.
http.get('/bundle.js', function (res) {
var div = document.getElementById('result');
div.innerHTML += 'GET /beep<br>';
res.on('data', function (buf) {
div.innerHTML += buf;
});
res.on('end', function () {
div.innerHTML += '<br>__END__';
});
})

There are two sets of tests: the tests that run in Node (found in test/node) and the tests that run in the browser (found in test/browser). Normally the browser tests run on Sauce Labs.
Running npm test will run both sets of tests, but in order for the Sauce Labs tests to run you will need to sign up for an account (free for open source projects) and put the credentials in a .zuulrc file.
To run just the Node tests, run npm run test-node.
To run the browser tests locally, run npm run test-browser-local and point your browser to http://localhost:8080/__zuul
A library to create readable "multipart/form-data" streams. Can be used to submit forms and file uploads to other web applications.
The API of this library is inspired by the XMLHttpRequest-2 FormData Interface.
npm install --save form-data
In this example we are constructing a form with 3 fields that contain a string, a buffer and a file stream.
var FormData = require('form-data');
var fs = require('fs');
var form = new FormData();
form.append('my_field', 'my value');
form.append('my_buffer', Buffer.alloc(10));
form.append('my_file', fs.createReadStream('/foo/bar.jpg'));

Also you can use an http-response stream:
var FormData = require('form-data');
var http = require('http');
var form = new FormData();
http.request('http://nodejs.org/images/logo.png', function(response) {
form.append('my_field', 'my value');
form.append('my_buffer', Buffer.alloc(10));
form.append('my_logo', response);
});

Or @mikeal’s request stream:
var FormData = require('form-data');
var request = require('request');
var form = new FormData();
form.append('my_field', 'my value');
form.append('my_buffer', Buffer.alloc(10));
form.append('my_logo', request('http://nodejs.org/images/logo.png'));

In order to submit this form to a web application, call the submit(url, [callback]) method:
form.submit('http://example.org/', function(err, res) {
// res – response object (http.IncomingMessage) //
res.resume();
});

For more advanced request manipulation, the submit() method returns an http.ClientRequest object, or you can choose one of the alternative submission methods.
You can provide custom options, such as maxDataSize:
var FormData = require('form-data');
var form = new FormData({ maxDataSize: 20971520 });
form.append('my_field', 'my value');
form.append('my_buffer', /* something big */);

The list of available options can be found in the combined-stream readme.
You can use node’s http client interface:
var http = require('http');
var request = http.request({
method: 'post',
host: 'example.org',
path: '/upload',
headers: form.getHeaders()
});
form.pipe(request);
request.on('response', function(res) {
console.log(res.statusCode);
});

Or if you would prefer the 'Content-Length' header to be set for you:
To use custom headers and pre-known length in parts:
var CRLF = '\r\n';
var form = new FormData();
var options = {
header: CRLF + '--' + form.getBoundary() + CRLF + 'X-Custom-Header: 123' + CRLF + CRLF,
knownLength: 1
};
form.append('my_buffer', buffer, options);
form.submit('http://example.com/', function(err, res) {
if (err) throw err;
console.log('Done');
});

Form-Data can recognize and fetch all the required information from common types of streams (fs.readStream, http.response and @mikeal’s request); for other types of streams you’d need to provide “file”-related information manually:
someModule.stream(function(err, stdout, stderr) {
if (err) throw err;
var form = new FormData();
form.append('file', stdout, {
filename: 'unicycle.jpg', // ... or:
filepath: 'photos/toys/unicycle.jpg',
contentType: 'image/jpeg',
knownLength: 19806
});
form.submit('http://example.com/', function(err, res) {
if (err) throw err;
console.log('Done');
});
});

The filepath property overrides filename and may contain a relative path. This is typically used when uploading multiple files from a directory.
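As an illustration of what the filename and contentType options end up producing, here is roughly the shape of a single multipart part header, built by hand in plain Node (the boundary token below is a made-up placeholder; form-data generates its own):

```javascript
// Hand-built sketch of one multipart/form-data part header; not
// form-data's actual output, just the general wire shape.
var CRLF = '\r\n';
var boundary = '----sketchBoundary123'; // placeholder; form-data picks a unique one
var partHeader =
  '--' + boundary + CRLF +
  'Content-Disposition: form-data; name="file"; filename="unicycle.jpg"' + CRLF +
  'Content-Type: image/jpeg' + CRLF + CRLF; // blank line before the body bytes
console.log(partHeader.split(CRLF)[1]);
```

The part body (the file bytes) would follow the blank line, and the final part is terminated by the boundary with a trailing `--`.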
For edge cases, like a POST request to a URL with a query string or passing HTTP auth credentials, an object can be passed to form.submit() as the first parameter:
form.submit({
host: 'example.com',
path: '/probably.php?extra=params',
auth: 'username:password'
}, function(err, res) {
console.log(res.statusCode);
});

In case you need to also send custom HTTP headers with the POST request, you can use the headers key in the first parameter of form.submit():
form.submit({
host: 'example.com',
path: '/surelynot.php',
headers: {'x-test-header': 'test-header-value'}
}, function(err, res) {
console.log(res.statusCode);
});

Form submission using request:
var formData = {
my_field: 'my_value',
my_file: fs.createReadStream(__dirname + '/unicycle.jpg'),
};
request.post({url:'http://service.com/upload', formData: formData}, function(err, httpResponse, body) {
if (err) {
return console.error('upload failed:', err);
}
console.log('Upload successful! Server responded with:', body);
});

For more details, see the request readme.
You can also submit a form using node-fetch:
var form = new FormData();
form.append('a', 1);
fetch('http://example.com', { method: 'POST', body: form })
.then(function(res) {
return res.json();
}).then(function(json) {
console.log(json);
});

The getLengthSync() method DOES NOT calculate length for streams; use the knownLength option as a workaround.

FormData 2.x has dropped support for node@0.10.x.

A fully persistent red-black tree written 100% in JavaScript. Works both in node.js and in the browser via browserify.
Functional (or fully persistent) data structures allow for non-destructive updates. So if you insert an element into the tree, it returns a new tree with the inserted element rather than destructively updating the existing tree in place. Doing this requires using extra memory, and a naive approach could cost as much as reallocating the entire tree. Instead, this data structure saves memory by recycling references to previously allocated subtrees, requiring only O(log(n)) additional memory per update instead of a full O(n) copy.
One advantage of this is that it is possible to apply insertions and removals to the tree while still iterating over previous versions of the tree. Functional and persistent data structures can also be useful in many geometric algorithms, like point location within triangulations or ray queries, and can be used to analyze the history of executing various algorithms. This added power comes at a cost, though, since it is generally a bit slower to use a functional data structure than an imperative version. However, if your application needs this behavior then you may consider using this module.
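The structural-sharing idea can be sketched with a plain (unbalanced) binary search tree. This is an illustration of persistence only, not this module's red-black implementation:

```javascript
// Minimal sketch of a persistent (non-destructive) insert: each update
// returns a new root, while untouched subtrees are shared by reference.
function node(key, value, left, right) {
  return { key: key, value: value, left: left || null, right: right || null };
}
function insert(n, key, value) {
  if (n === null) return node(key, value);
  if (key < n.key) return node(n.key, n.value, insert(n.left, key, value), n.right);
  if (key > n.key) return node(n.key, n.value, n.left, insert(n.right, key, value));
  return node(n.key, value, n.left, n.right); // replace value at an existing key
}
var t1 = insert(null, 2, 'b');
var t2 = insert(t1, 1, 'a');
var t3 = insert(t2, 3, 'c');      // t2 is still fully intact afterwards
console.log(t2.left === t3.left); // true: the untouched left subtree is shared
```

Only the nodes along the path from the root to the changed position are copied, which is where the O(log(n)) extra memory per update comes from.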
npm install functional-red-black-tree
Here is an example of some basic usage:
//Load the library
var createTree = require("functional-red-black-tree")
//Create a tree
var t1 = createTree()
//Insert some items into the tree
var t2 = t1.insert(1, "foo")
var t3 = t2.insert(2, "bar")
//Remove something
var t4 = t3.remove(1)

var tree = createTree([compare])
Creates an empty functional tree.
compare is an optional comparison function with the same semantics as Array.prototype.sort(). Returns: an empty tree ordered by compare.
tree.keys: A sorted array of all the keys in the tree
tree.values: An array of all the values in the tree
tree.length: The number of items in the tree
tree.get(key): Retrieves the value associated with the given key. key is the key of the item to look up. Returns: the value of the first node associated with key.
tree.insert(key, value): Creates a new tree with the new pair inserted. key is the key of the item to insert; value is the value of the item to insert. Returns: a new tree with key and value inserted.
tree.remove(key): Removes the first item with key in the tree. key is the key of the item to remove. Returns: a new tree with the given item removed, if it exists.
tree.find(key): Returns an iterator pointing to the first item in the tree with key, otherwise null.
tree.ge(key): Finds the first item in the tree whose key is >= key. key is the key to search for. Returns: an iterator at the given element.
tree.gt(key): Finds the first item in the tree whose key is > key. key is the key to search for. Returns: an iterator at the given element.
tree.lt(key): Finds the last item in the tree whose key is < key. key is the key to search for. Returns: an iterator at the given element.
tree.le(key): Finds the last item in the tree whose key is <= key. key is the key to search for. Returns: an iterator at the given element.
tree.at(position): Finds an iterator starting at the given element. position is the index at which the iterator gets created. Returns: an iterator starting at position.
tree.begin: An iterator pointing to the first element in the tree
tree.end: An iterator pointing to the last element in the tree
tree.forEach(visitor(key,value)[, lo[, hi]]): Walks a visitor function over the nodes of the tree in order. visitor(key,value) is a callback that gets executed on each node; if a truthy value is returned from the visitor, iteration stops. lo is an optional start of the range to visit (inclusive); hi is an optional end of the range to visit (exclusive). Returns: the last value returned by the callback.
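The visitor contract (in-order walk, early stop on a truthy return) can be sketched independently of this module; the walk function below is a simplified stand-in, not the library's implementation:

```javascript
// In-order walk over plain nodes that stops as soon as the visitor
// returns a truthy value, mirroring the documented forEach behavior.
function walk(n, visit) {
  if (!n) return undefined;
  var r = walk(n.left, visit);
  if (r) return r;
  r = visit(n.key, n.value);
  if (r) return r;
  return walk(n.right, visit);
}
var root = { key: 2, value: 'b',
  left:  { key: 1, value: 'a', left: null, right: null },
  right: { key: 3, value: 'c', left: null, right: null } };
var seen = [];
walk(root, function (k) { seen.push(k); return k === 2; }); // stop at key 2
console.log(seen.join(',')); // "1,2": key 3 is never visited
```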
tree.root: Returns the root node of the tree
Each node of the tree has the following properties:
node.key: The key associated with the node
node.value: The value associated with the node
node.left: The left subtree of the node
node.right: The right subtree of the node
iter.key: The key of the item referenced by the iterator
iter.value: The value of the item referenced by the iterator
iter.node: The node at the iterator's current position; null if the iterator is not valid
iter.tree: The tree associated with the iterator
iter.index: Returns the position of this iterator in the sequence
iter.valid: Checks if the iterator is valid
iter.clone(): Makes a copy of the iterator
iter.remove(): Removes the item at the position of the iterator. Returns: a new binary search tree with iter's item removed.
iter.update(value): Updates the value of the node in the tree at this iterator. Returns: a new binary search tree with the corresponding node updated.
iter.next(): Advances the iterator to the next position
iter.prev(): Moves the iterator backward one element
iter.hasNext: If true, the iterator is not at the end of the sequence
iter.hasPrev: If true, the iterator is not at the beginning of the sequence
Fast elliptic-curve cryptography in a plain javascript implementation.
NOTE: Please take a look at http://safecurves.cr.yp.to/ before choosing a curve for your cryptography operations.
ECC is much slower than regular RSA cryptography, and the JS implementations are slower still.
$ node benchmarks/index.js
Benchmarking: sign
elliptic#sign x 262 ops/sec ±0.51% (177 runs sampled)
eccjs#sign x 55.91 ops/sec ±0.90% (144 runs sampled)
------------------------
Fastest is elliptic#sign
========================
Benchmarking: verify
elliptic#verify x 113 ops/sec ±0.50% (166 runs sampled)
eccjs#verify x 48.56 ops/sec ±0.36% (125 runs sampled)
------------------------
Fastest is elliptic#verify
========================
Benchmarking: gen
elliptic#gen x 294 ops/sec ±0.43% (176 runs sampled)
eccjs#gen x 62.25 ops/sec ±0.63% (129 runs sampled)
------------------------
Fastest is elliptic#gen
========================
Benchmarking: ecdh
elliptic#ecdh x 136 ops/sec ±0.85% (156 runs sampled)
------------------------
Fastest is elliptic#ecdh
========================

var EC = require('elliptic').ec;
// Create and initialize EC context
// (better do it once and reuse it)
var ec = new EC('secp256k1');
// Generate keys
var key = ec.genKeyPair();
// Sign the message's hash (input must be an array, or a hex-string)
var msgHash = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
var signature = key.sign(msgHash);
// Export DER encoded signature in Array
var derSign = signature.toDER();
// Verify signature
console.log(key.verify(msgHash, derSign));
// CHECK WITH NO PRIVATE KEY
var pubPoint = key.getPublic();
var x = pubPoint.getX();
var y = pubPoint.getY();
// Public Key MUST be either:
// 1) '04' + hex string of x + hex string of y; or
// 2) object with two hex string properties (x and y); or
// 3) object with two buffer properties (x and y)
var pub = pubPoint.encode('hex'); // case 1
var pub = { x: x.toString('hex'), y: y.toString('hex') }; // case 2
var pub = { x: x.toBuffer(), y: y.toBuffer() }; // case 3
var pub = { x: x.toArrayLike(Buffer), y: y.toArrayLike(Buffer) }; // case 3
// Import public key
var key = ec.keyFromPublic(pub, 'hex');
// Signature MUST be either:
// 1) DER-encoded signature as hex-string; or
// 2) DER-encoded signature as buffer; or
// 3) object with two hex-string properties (r and s); or
// 4) object with two buffer properties (r and s)
var signature = '3046022100...'; // case 1
var signature = new Buffer('...'); // case 2
var signature = { r: 'b1fc...', s: '9c42...' }; // case 3
// Verify signature
console.log(key.verify(msgHash, signature));

var EdDSA = require('elliptic').eddsa;
// Create and initialize EdDSA context
// (better do it once and reuse it)
var ec = new EdDSA('ed25519');
// Create key pair from secret
var key = ec.keyFromSecret('693e3c...'); // hex string, array or Buffer
// Sign the message's hash (input must be an array, or a hex-string)
var msgHash = [ 0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10 ];
var signature = key.sign(msgHash).toHex();
// Verify signature
console.log(key.verify(msgHash, signature));
// CHECK WITH NO PRIVATE KEY
// Import public key
var pub = '0a1af638...';
var key = ec.keyFromPublic(pub, 'hex');
// Verify signature
var signature = '70bed1...';
console.log(key.verify(msgHash, signature));

var EC = require('elliptic').ec;
var ec = new EC('curve25519');
// Generate keys
var key1 = ec.genKeyPair();
var key2 = ec.genKeyPair();
var shared1 = key1.derive(key2.getPublic());
var shared2 = key2.derive(key1.getPublic());
console.log('Both shared secrets are BN instances');
console.log(shared1.toString(16));
console.log(shared2.toString(16));

Three and more members:
var EC = require('elliptic').ec;
var ec = new EC('curve25519');
var A = ec.genKeyPair();
var B = ec.genKeyPair();
var C = ec.genKeyPair();
var AB = A.getPublic().mul(B.getPrivate())
var BC = B.getPublic().mul(C.getPrivate())
var CA = C.getPublic().mul(A.getPrivate())
var ABC = AB.mul(C.getPrivate())
var BCA = BC.mul(A.getPrivate())
var CAB = CA.mul(B.getPrivate())
console.log(ABC.getX().toString(16))
console.log(BCA.getX().toString(16))
console.log(CAB.getX().toString(16))

NOTE: .derive() returns a BN instance.
Elliptic.js supports the following curve types:
The following curve ‘presets’ are embedded into the library: secp256k1, p192, p224, p256, p384, p521, curve25519, ed25519.
NOTE: curve25519 cannot be used for ECDSA; use ed25519 instead.
ECDSA uses deterministic k value generation as per RFC 6979. Most of the curve operations are performed on non-affine coordinates (either projective or extended), and various windowing techniques are used for different cases.
All operations are performed in a reduction context using bn.js; hashing is provided by hash.js.
(elliptic for the browser and secp256k1-node for node)
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047

Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way they work. Here is the list of them, in the order of appearance in the function name:
i - perform the operation in-place, storing the result in the host object (the one on which the method was invoked); might be used to avoid number-allocation costs
u - unsigned; ignore the sign of operands when performing the operation, or always return a positive value. The second case applies to reduction operations like mod(): if the result would be negative, the modulus is added to make it positive.
The only available postfix at the moment is:
n - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
Examples:
a.iadd(b) - perform addition on a and b, storing the result in a
a.umod(b) - reduce a modulo b, returning a positive value
a.iushln(13) - shift bits of a left by 13
Prefixes/postfixes are put in parens at the end of the line. endian can be either le (little-endian) or be (big-endian).
a.clone() - clone number
a.toString(base, length) - convert to base-string and pad with zeroes
a.toNumber() - convert to JavaScript Number (limited to 53 bits)
a.toJSON() - convert to JSON-compatible hex string (alias of toString(16))
a.toArray(endian, length) - convert to byte Array, and optionally zero pad to length, throwing if already exceeding
a.toArrayLike(type, endian, length) - convert to an instance of type, which must behave like an Array
a.toBuffer(endian, length) - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: a.toArrayLike(Buffer, endian, length)
a.bitLength() - get number of bits occupied
a.zeroBits() - return number of less-significant consecutive zero bits (example: 1010000 has 4 zero bits)
a.byteLength() - return number of bytes occupied
a.isNeg() - true if the number is negative
a.isEven() - no comments
a.isOdd() - no comments
a.isZero() - no comments
a.cmp(b) - compare numbers and return -1 (a < b), 0 (a == b), or 1 (a > b) depending on the comparison result (ucmp, cmpn)
a.lt(b) - a less than b (n)
a.lte(b) - a less than or equals b (n)
a.gt(b) - a greater than b (n)
a.gte(b) - a greater than or equals b (n)
a.eq(b) - a equals b (n)
a.toTwos(width) - convert to two’s complement representation, where width is the bit width
a.fromTwos(width) - convert from two’s complement representation, where width is the bit width
BN.isBN(object) - returns true if the supplied object is a BN.js instance
a.neg() - negate sign (i)
a.abs() - absolute value (i)
a.add(b) - addition (i, n, in)
a.sub(b) - subtraction (i, n, in)
a.mul(b) - multiply (i, n, in)
a.sqr() - square (i)
a.pow(b) - raise a to the power of b
a.div(b) - divide (divn, idivn)
a.mod(b) - reduce (u, n) (but no umodn)
a.divRound(b) - rounded division
a.or(b) - or (i, u, iu)
a.and(b) - and (i, u, iu, andln) (NOTE: andln is going to be replaced with andn in future)
a.xor(b) - xor (i, u, iu)
a.setn(b) - set specified bit to 1
a.shln(b) - shift left (i, u, iu)
a.shrn(b) - shift right (i, u, iu)
a.testn(b) - test if specified bit is set
a.maskn(b) - clear bits with indexes higher than or equal to b (i)
a.bincn(b) - add 1 << b to the number
a.notn(w) - not (for the width specified by w) (i)
a.gcd(b) - GCD
a.egcd(b) - Extended GCD results ({ a: ..., b: ..., gcd: ... })
a.invm(b) - inverse of a modulo b
When doing lots of reductions using the same modulus, it might be beneficial to use some tricks, like Montgomery multiplication or a special algorithm for Mersenne primes.
To enable these tricks, one should create a reduction context:
where num is just a BN instance.
Or:
Where primeName is either of these Mersenne Primes:
'k256', 'p224', 'p192', or 'p25519'

Or:
This reduces numbers with the Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).
Before performing anything in a reduction context, numbers should be converted into it. Usually, this means that one should:
Here is how one may convert numbers to red:
Where red is a reduction context created using instructions above
Here is how to convert them back:
Most of the instructions from the very start of this readme have their counterparts in red context:
a.redAdd(b), a.redIAdd(b)
a.redSub(b), a.redISub(b)
a.redShl(num)
a.redMul(b), a.redIMul(b)
a.redSqr(), a.redISqr()
a.redSqrt() - square root modulo the reduction context’s prime
a.redInvm() - modular inverse of the number
a.redNeg()
a.redPow(b) - modular exponentiation
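The overall workflow is: create a context (BN.red(num), BN.red(primeName), or BN.mont(num)), convert numbers in with toRed(), use the red-prefixed operations, and convert back out with fromRed(). As a runnable illustration of the same convert-in / operate / convert-out pattern using native BigInt (an analogue only, not bn.js code; the prime below is the value behind the 'p25519' preset):

```javascript
// Analogue of bn.js's reduction-context pattern, written with BigInt.
const p = 2n ** 255n - 19n;              // the prime behind the 'p25519' preset
const toRed = (x) => ((x % p) + p) % p;  // like a.toRed(red): enter the context
const redAdd = (x, y) => (x + y) % p;    // like a.redAdd(b): operate inside it
const a = toRed(-5n);                    // normalized to a positive residue
const b = toRed(12n);
console.log(redAdd(a, b) === 7n);        // true: (-5 + 12) mod p
```

Keeping all intermediate values inside the context avoids a full reduction on every step, which is the point of the red-prefixed API.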
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way the work. Here is the list of them in the order of appearance in the function name:
i - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costsu - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like mod(). In such cases if the result will be negative - modulo will be added to the result to make it positiveThe only available postfix at the moment is:
n - which means that the argument of the function must be a plain JavaScript Number. Decimals are not supported.a.iadd(b) - perform addition on a and b, storing the result in aa.umod(b) - reduce a modulo b, returning positive valuea.iushln(13) - shift bits of a left by 13Prefixes/postfixes are put in parens at the of the line. endian - could be either le (little-endian) or be (big-endian).
a.clone() - clone numbera.toString(base, length) - convert to base-string and pad with zeroesa.toNumber() - convert to Javascript Number (limited to 53 bits)a.toJSON() - convert to JSON compatible hex string (alias of toString(16))a.toArray(endian, length) - convert to byte Array, and optionally zero pad to length, throwing if already exceedinga.toArrayLike(type, endian, length) - convert to an instance of type, which must behave like an Arraya.toBuffer(endian, length) - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: a.toArrayLike(Buffer, endian, length)a.bitLength() - get number of bits occupieda.zeroBits() - return number of less-significant consequent zero bits (example: 1010000 has 4 zero bits)a.byteLength() - return number of bytes occupieda.isNeg() - true if the number is negativea.isEven() - no commentsa.isOdd() - no commentsa.isZero() - no commentsa.cmp(b) - compare numbers and return -1 (a < b), 0 (a == b), or 1 (a > b) depending on the comparison result (ucmp, cmpn)a.lt(b) - a less than b (n)a.lte(b) - a less than or equals b (n)a.gt(b) - a greater than b (n)a.gte(b) - a greater than or equals b (n)a.eq(b) - a equals b (n)a.toTwos(width) - convert to two’s complement representation, where width is bit widtha.fromTwos(width) - convert from two’s complement representation, where width is the bit widthBN.isBN(object) - returns true if the supplied object is a BN.js instancea.neg() - negate sign (i)a.abs() - absolute value (i)a.add(b) - addition (i, n, in)a.sub(b) - subtraction (i, n, in)a.mul(b) - multiply (i, n, in)a.sqr() - square (i)a.pow(b) - raise a to the power of ba.div(b) - divide (divn, idivn)a.mod(b) - reduct (u, n) (but no umodn)a.divRound(b) - rounded divisiona.or(b) - or (i, u, iu)a.and(b) - and (i, u, iu, andln) (NOTE: andln is going to be replaced with andn in future)a.xor(b) - xor (i, u, iu)a.setn(b) - set specified bit to 1a.shln(b) - shift left (i, u, iu)a.shrn(b) - shift 
right (i, u, iu)a.testn(b) - test if specified bit is seta.maskn(b) - clear bits with indexes higher or equal to b (i)a.bincn(b) - add 1 << b to the numbera.notn(w) - not (for the width specified by w) (i)a.gcd(b) - GCDa.egcd(b) - Extended GCD results ({ a: ..., b: ..., gcd: ... })a.invm(b) - inverse a modulo bWhen doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like Montgomery multiplication, or using special algorithm for Mersenne Prime.
To enable this tricks one should create a reduction context:
where num is just a BN instance.
Or:
Where primeName is either of these Mersenne Primes:
'k256''p224''p192''p25519'Or:
To reduce numbers with Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).
Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should:
Here is how one may convert numbers to red:
Where red is a reduction context created using instructions above
Here is how to convert them back:
Most of the instructions from the very start of this readme have their counterparts in red context:
a.redAdd(b), a.redIAdd(b)a.redSub(b), a.redISub(b)a.redShl(num)a.redMul(b), a.redIMul(b)a.redSqr(), a.redISqr()a.redSqrt() - square root modulo reduction context’s primea.redInvm() - modular inverse of the numbera.redNeg()a.redPow(b) - modular exponentiation
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way the work. Here is the list of them in the order of appearance in the function name:
i - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costsu - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like mod(). In such cases if the result will be negative - modulo will be added to the result to make it positiveThe only available postfix at the moment is:
n - which means that the argument of the function must be a plain JavaScript Number. Decimals are not supported.a.iadd(b) - perform addition on a and b, storing the result in aa.umod(b) - reduce a modulo b, returning positive valuea.iushln(13) - shift bits of a left by 13Prefixes/postfixes are put in parens at the of the line. endian - could be either le (little-endian) or be (big-endian).
a.clone() - clone numbera.toString(base, length) - convert to base-string and pad with zeroesa.toNumber() - convert to Javascript Number (limited to 53 bits)a.toJSON() - convert to JSON compatible hex string (alias of toString(16))a.toArray(endian, length) - convert to byte Array, and optionally zero pad to length, throwing if already exceedinga.toArrayLike(type, endian, length) - convert to an instance of type, which must behave like an Arraya.toBuffer(endian, length) - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: a.toArrayLike(Buffer, endian, length)a.bitLength() - get number of bits occupieda.zeroBits() - return number of less-significant consequent zero bits (example: 1010000 has 4 zero bits)a.byteLength() - return number of bytes occupieda.isNeg() - true if the number is negativea.isEven() - no commentsa.isOdd() - no commentsa.isZero() - no commentsa.cmp(b) - compare numbers and return -1 (a < b), 0 (a == b), or 1 (a > b) depending on the comparison result (ucmp, cmpn)a.lt(b) - a less than b (n)a.lte(b) - a less than or equals b (n)a.gt(b) - a greater than b (n)a.gte(b) - a greater than or equals b (n)a.eq(b) - a equals b (n)a.toTwos(width) - convert to two’s complement representation, where width is bit widtha.fromTwos(width) - convert from two’s complement representation, where width is the bit widthBN.isBN(object) - returns true if the supplied object is a BN.js instancea.neg() - negate sign (i)a.abs() - absolute value (i)a.add(b) - addition (i, n, in)a.sub(b) - subtraction (i, n, in)a.mul(b) - multiply (i, n, in)a.sqr() - square (i)a.pow(b) - raise a to the power of ba.div(b) - divide (divn, idivn)a.mod(b) - reduct (u, n) (but no umodn)a.divRound(b) - rounded divisiona.or(b) - or (i, u, iu)a.and(b) - and (i, u, iu, andln) (NOTE: andln is going to be replaced with andn in future)a.xor(b) - xor (i, u, iu)a.setn(b) - set specified bit to 1a.shln(b) - shift left (i, u, iu)a.shrn(b) - shift 
right (i, u, iu)a.testn(b) - test if specified bit is seta.maskn(b) - clear bits with indexes higher or equal to b (i)a.bincn(b) - add 1 << b to the numbera.notn(w) - not (for the width specified by w) (i)a.gcd(b) - GCDa.egcd(b) - Extended GCD results ({ a: ..., b: ..., gcd: ... })a.invm(b) - inverse a modulo bWhen doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like Montgomery multiplication, or using special algorithm for Mersenne Prime.
To enable this tricks one should create a reduction context:
where num is just a BN instance.
Or:
Where primeName is either of these Mersenne Primes:
'k256''p224''p192''p25519'Or:
To reduce numbers with Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).
Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should:
Here is how one may convert numbers to red:
Where red is a reduction context created using instructions above
Here is how to convert them back:
Most of the instructions from the very start of this readme have their counterparts in red context:
a.redAdd(b), a.redIAdd(b)a.redSub(b), a.redISub(b)a.redShl(num)a.redMul(b), a.redIMul(b)a.redSqr(), a.redISqr()a.redSqrt() - square root modulo reduction context’s primea.redInvm() - modular inverse of the numbera.redNeg()a.redPow(b) - modular exponentiation
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way the work. Here is the list of them in the order of appearance in the function name:
i - perform operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costsu - unsigned, ignore the sign of operands when performing operation, or always return positive value. Second case applies to reduction operations like mod(). In such cases if the result will be negative - modulo will be added to the result to make it positiveThe only available postfix at the moment is:
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047

Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:

- i - perform the operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
- u - unsigned, ignore the sign of operands when performing the operation, or always return a positive value. The second case applies to reduction operations like mod(). In such cases, if the result would be negative, the modulo is added to the result to make it positive

The only available postfix at the moment is:
- n - the argument of the function must be a plain JavaScript Number. Decimals are not supported.

Examples of using these prefixes and postfixes:

- a.iadd(b) - perform addition on a and b, storing the result in a
- a.umod(b) - reduce a modulo b, returning a positive value
- a.iushln(13) - shift bits of a left by 13

Prefixes/postfixes are put in parens at the end of the line. endian - could be either le (little-endian) or be (big-endian).
Utilities:

- a.clone() - clone number
- a.toString(base, length) - convert to base-string and pad with zeroes
- a.toNumber() - convert to JavaScript Number (limited to 53 bits)
- a.toJSON() - convert to JSON compatible hex string (alias of toString(16))
- a.toArray(endian, length) - convert to byte Array, and optionally zero pad to length, throwing if already exceeding
- a.toArrayLike(type, endian, length) - convert to an instance of type, which must behave like an Array
- a.toBuffer(endian, length) - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: a.toArrayLike(Buffer, endian, length)
- a.bitLength() - get number of bits occupied
- a.zeroBits() - return number of less-significant consequent zero bits (example: 1010000 has 4 zero bits)
- a.byteLength() - return number of bytes occupied
- a.isNeg() - true if the number is negative
- a.isEven() - no comments
- a.isOdd() - no comments
- a.isZero() - no comments
- a.cmp(b) - compare numbers and return -1 (a < b), 0 (a == b), or 1 (a > b) depending on the comparison result (ucmp, cmpn)
- a.lt(b) - a less than b (n)
- a.lte(b) - a less than or equals b (n)
- a.gt(b) - a greater than b (n)
- a.gte(b) - a greater than or equals b (n)
- a.eq(b) - a equals b (n)
- a.toTwos(width) - convert to two's complement representation, where width is the bit width
- a.fromTwos(width) - convert from two's complement representation, where width is the bit width
- BN.isBN(object) - returns true if the supplied object is a BN.js instance

Arithmetic:

- a.neg() - negate sign (i)
- a.abs() - absolute value (i)
- a.add(b) - addition (i, n, in)
- a.sub(b) - subtraction (i, n, in)
- a.mul(b) - multiply (i, n, in)
- a.sqr() - square (i)
- a.pow(b) - raise a to the power of b
- a.div(b) - divide (divn, idivn)
- a.mod(b) - reduce (u, n) (but no umodn)
- a.divRound(b) - rounded division

Bit operations:

- a.or(b) - or (i, u, iu)
- a.and(b) - and (i, u, iu, andln) (NOTE: andln is going to be replaced with andn in future)
- a.xor(b) - xor (i, u, iu)
- a.setn(b) - set specified bit to 1
- a.shln(b) - shift left (i, u, iu)
- a.shrn(b) - shift right (i, u, iu)
- a.testn(b) - test if specified bit is set
- a.maskn(b) - clear bits with indexes higher or equal to b (i)
- a.bincn(b) - add 1 << b to the number
- a.notn(w) - not (for the width specified by w) (i)

Number theory:

- a.gcd(b) - GCD
- a.egcd(b) - Extended GCD results ({ a: ..., b: ..., gcd: ... })
- a.invm(b) - inverse of a modulo b

When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like Montgomery multiplication, or using a special algorithm for a Mersenne Prime.
To enable these tricks one should create a reduction context:
where num is just a BN instance.
Or:
Where primeName is either of these Mersenne Primes:
- 'k256'
- 'p224'
- 'p192'
- 'p25519'

Or:
To reduce numbers with the Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).
Before performing any operations in a reduction context, numbers should be converted to it. Usually, this means that one should:
Here is how one may convert numbers to red:
Where red is a reduction context created using the instructions above.
Here is how to convert them back:
Most of the instructions from the very start of this readme have their counterparts in red context:
- a.redAdd(b), a.redIAdd(b)
- a.redSub(b), a.redISub(b)
- a.redShl(num)
- a.redMul(b), a.redIMul(b)
- a.redSqr(), a.redISqr()
- a.redSqrt() - square root modulo the reduction context's prime
- a.redInvm() - modular inverse of the number
- a.redNeg()
- a.redPow(b) - modular exponentiation
normalize-package-data exports a function that normalizes package metadata. This data is typically found in a package.json file, but in principle could come from any source - for example the npm registry.
normalize-package-data is used by read-package-json to normalize the data it reads from a package.json file. In turn, read-package-json is used by npm and various npm-related tools.
npm install normalize-package-data
Basic usage is really simple. You call the function that normalize-package-data exports. Let’s call it normalizeData.
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData)
// packageData is now normalized

You may activate strict validation by passing true as the second argument.
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
normalizeData(packageData, true)
// packageData is now normalized

If strict mode is activated, only Semver 2.0 version strings are accepted. Otherwise, Semver 1.0 strings are accepted as well. Packages must have a name, and the name field must not contain leading or trailing whitespace.
Optionally, you may pass a “warning” function. It gets called whenever the normalizeData function encounters something that doesn’t look right. It indicates less than perfect input data.
normalizeData = require('normalize-package-data')
packageData = require("./package.json")
warnFn = function(msg) { console.error(msg) }
normalizeData(packageData, warnFn)
// packageData is now normalized. Any number of warnings may have been logged.

You may combine strict validation with warnings by passing true as the second argument, and warnFn as third.
When private field is set to true, warnings will be suppressed.
If the supplied data has an invalid name or version field, normalizeData will throw an error. Depending on where you call normalizeData, you may want to catch these errors so you can pass them to a callback.
Normalization entails, among other things:

- The name field gets trimmed (unless in strict mode).
- The version field gets cleaned by semver.clean. See documentation for the semver module.
- If name and/or version fields are missing, they are set to empty strings.
- If the files field is not an array, it will be removed.
- If the bin field is a string, then bin will become an object with name set to the value of the name field, and bin set to the original string value.
- If the man field is a string, it will become an array with the original string as its sole member.
- If the keywords field is a string, it is considered to be a list of keywords separated by one or more white-space characters. It gets converted to an array by splitting on \s+.
- People fields (author, maintainers, contributors) get converted into objects with name, email and url properties.
- If a bundledDependencies field (a typo) exists and a bundleDependencies field does not, bundledDependencies will get renamed to bundleDependencies.
- If a dependencies field (dependencies, devDependencies, optionalDependencies) is a string, it gets converted into an object with familiar name=>value pairs.
- Entries in optionalDependencies get added to dependencies. The optionalDependencies array is left untouched.
- Dependencies using hosted-repo shortcuts (org/proj, github:org/proj, bitbucket:org/proj, gitlab:org/proj, gist:docid) will have the shortcut left in place. (In the case of github, the org/proj form will be expanded to github:org/proj.) THIS MARKS A BREAKING CHANGE FROM V1, where the shortcut was previously expanded to a URL.
- If the description field does not exist, but the readme field does, then (more or less) the first paragraph of text that's found in the readme is taken as the value for description.
- If the repository field is a string, it will become an object with url set to the original string value, and type set to "git".
- If repository.url is not a valid url, but in the style of "[owner-name]/[repo-name]", repository.url will be set to git+https://github.com/[owner-name]/[repo-name].git
- If the bugs field is a string, the value of the bugs field is changed into an object with url set to the original string value.
- If the bugs field does not exist, but the repository field points to a repository hosted on GitHub, the value of the bugs field gets set to a url in the form of https://github.com/[owner-name]/[repo-name]/issues . If the repository field points to a GitHub Gist repo url, the associated http url is chosen.
- If the bugs field is an object, the resulting value only has email and url properties. If the email and url properties are not strings, they are ignored. If no valid value for either email or url is found, the bugs field will be removed.
- If the homepage field is not a string, it will be removed.
- If the homepage field does not specify a protocol, then http is assumed. For example, myproject.org will be changed to http://myproject.org.
- If the homepage field does not exist, but the repository field points to a repository hosted on GitHub, the value of the homepage field gets set to a url in the form of https://github.com/[owner-name]/[repo-name]#readme . If the repository field points to a GitHub Gist repo url, the associated http url is chosen.

If a name field is given, the value of the name field must be a string. The string may not:

- contain the characters /@\s+%
- be node_modules or favicon.ico (case doesn't matter)

If a version field is given, the value of the version field must be a valid semver string, as determined by the semver.valid method. See documentation for the semver module.
This package contains code based on read-package-json written by Isaac Z. Schlueter. Used with permission.

Doctrine is a JSDoc parser that parses documentation comments from JavaScript (you need to pass in the comment, not a whole JavaScript file).
You can install Doctrine using npm:
npm install doctrine --save-dev
Doctrine can also be used in web browsers using Browserify.
Require doctrine inside of your JavaScript:
The primary method is parse(), which accepts two arguments: the JSDoc comment to parse and an optional options object. The available options are:
- unwrap - set to true to delete the leading /**, any * that begins a line, and the trailing */ from the source text. Default: false.
- tags - an array of tags to return. When specified, Doctrine returns only tags in this array. For example, if tags is ["param"], then only @param tags will be returned. Default: null.
- recoverable - set to true to keep parsing even when syntax errors occur. Default: false.
- sloppy - set to true to allow optional parameters to be specified in brackets (@param {string} [foo]). Default: false.
- lineNumbers - set to true to add lineNumber to each node, specifying the line on which the node is found in the source. Default: false.

Here's a simple example:
var ast = doctrine.parse(
[
"/**",
" * This function comment is parsed by doctrine",
" * @param {{ok:String}} userName",
"*/"
].join('\n'), { unwrap: true });

This example returns the following AST:
{
    "description": "This function comment is parsed by doctrine",
    "tags": [
        {
            "title": "param",
            "description": null,
            "type": {
                "type": "RecordType",
                "fields": [
                    {
                        "type": "FieldType",
                        "key": "ok",
                        "value": {
                            "type": "NameExpression",
                            "name": "String"
                        }
                    }
                ]
            },
            "name": "userName"
        }
    ]
}
See the demo page for more detail.
These folks keep the project moving and are resources for help:
Issues and pull requests will be triaged and responded to as quickly as possible. We operate under the ESLint Contributor Guidelines, so please be sure to read them before contributing. If you’re not sure where to dig in, check out the issues.
No. Doctrine can only parse JSDoc comments, so you'll need to pass just the JSDoc comment to Doctrine for it to work.
Some functions are derived from esprima.
Some extensions are derived from closure-compiler.
Join our Chatroom
Returns true if the given string looks like a glob pattern or an extglob pattern. This makes it easy to create code that only uses external modules like node-glob when necessary, resulting in much faster code execution and initialization time, and a better user experience.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
You might also be interested in is-valid-glob and has-glob.
True
Patterns that have glob characters or regex patterns will return true:
isGlob('!foo.js');
isGlob('*.js');
isGlob('**/abc.js');
isGlob('abc/*.js');
isGlob('abc/(aaa|bbb).js');
isGlob('abc/[a-z].js');
isGlob('abc/{a,b}.js');
//=> true

Extglobs
isGlob('abc/@(a).js');
isGlob('abc/!(a).js');
isGlob('abc/+(a).js');
isGlob('abc/*(a).js');
isGlob('abc/?(a).js');
//=> true

False
Escaped globs or extglobs return false:
isGlob('abc/\\@(a).js');
isGlob('abc/\\!(a).js');
isGlob('abc/\\+(a).js');
isGlob('abc/\\*(a).js');
isGlob('abc/\\?(a).js');
isGlob('\\!foo.js');
isGlob('\\*.js');
isGlob('\\*\\*/abc.js');
isGlob('abc/\\*.js');
isGlob('abc/\\(aaa|bbb).js');
isGlob('abc/\\[a-z].js');
isGlob('abc/\\{a,b}.js');
//=> false

Patterns that do not have glob patterns return false:
isGlob('abc.js');
isGlob('abc/def/ghi.js');
isGlob('foo.js');
isGlob('abc/@.js');
isGlob('abc/+.js');
isGlob('abc/?.js');
isGlob();
isGlob(null);
//=> false

Arrays are also false (if you want to check if an array has a glob pattern, use has-glob):
When options.strict === false the behavior is less strict in determining if a pattern is a glob. Meaning that some patterns that would return false may return true. This is done so that matching libraries like micromatch have a chance at determining if the pattern is a glob or not.
True
Patterns that have glob characters or regex patterns will return true:
isGlob('!foo.js', {strict: false});
isGlob('*.js', {strict: false});
isGlob('**/abc.js', {strict: false});
isGlob('abc/*.js', {strict: false});
isGlob('abc/(aaa|bbb).js', {strict: false});
isGlob('abc/[a-z].js', {strict: false});
isGlob('abc/{a,b}.js', {strict: false});
//=> true

Extglobs
isGlob('abc/@(a).js', {strict: false});
isGlob('abc/!(a).js', {strict: false});
isGlob('abc/+(a).js', {strict: false});
isGlob('abc/*(a).js', {strict: false});
isGlob('abc/?(a).js', {strict: false});
//=> true

False
Escaped globs or extglobs return false:
isGlob('\\!foo.js', {strict: false});
isGlob('\\*.js', {strict: false});
isGlob('\\*\\*/abc.js', {strict: false});
isGlob('abc/\\*.js', {strict: false});
isGlob('abc/\\(aaa|bbb).js', {strict: false});
isGlob('abc/\\[a-z].js', {strict: false});
isGlob('abc/\\{a,b}.js', {strict: false});
//=> false

Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 47 | jonschlinkert |
| 5 | doowb |
| 1 | phated |
| 1 | danhper |
| 1 | paulmillr |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.8.0, on March 27, 2019.

Overview
A regex that tokenizes JavaScript.
var jsTokens = require("js-tokens").default
var jsString = "var foo=opts.foo;\n..."
jsString.match(jsTokens)
// ["var", " ", "foo", "=", "opts", ".", "foo", ";", "\n", ...]

npm install js-tokens
jsTokens is a regex with the g flag that matches JavaScript tokens.
The regex always matches, even invalid JavaScript and the empty string.
The next match is always directly after the previous.
var token = matchToToken(match)

Takes a match returned by jsTokens.exec(string), and returns a {type: String, value: String} object. The following types are available:
Multi-line comments and strings also have a closed property indicating if the token was closed or not (see below).
Comments and strings both come in several flavors. To distinguish them, check if the token starts with //, /*, ', ", or `.
Names are ECMAScript IdentifierNames, that is, including both identifiers and keywords. You may use is-keyword-js to tell them apart.
Whitespace includes both line terminators and other whitespace.
The intention is to always support the latest ECMAScript version whose feature set has been finalized.
If adding support for a newer version requires changes, a new version with a major version bump will be released.
Currently, ECMAScript 2018 is supported.
Unterminated strings are still matched as strings. JavaScript strings cannot contain (unescaped) newlines, so unterminated strings simply end at the end of the line. Unterminated template strings can contain unescaped newlines, though, so they go on to the end of input.
Unterminated multi-line comments are also still matched as comments. They simply go on to the end of the input.
Unterminated regex literals are likely matched as division and whatever is inside the regex.
Invalid ASCII characters have their own capturing group.
Invalid non-ASCII characters are treated as names, to simplify the matching of names (except unicode spaces which are treated as whitespace). Note: See also the ES2018 section.
Regex literals may contain invalid regex syntax. They are still matched as regex literals. They may also contain repeated regex flags, to keep the regex simple.
Strings may contain invalid escape sequences.
Tokenizing JavaScript using regexes—in fact, one single regex—won’t be perfect. But that’s not the point either.
You may compare jsTokens with esprima by using esprima-compare.js. See npm run esprima-compare!
Template strings are matched as single tokens, from the starting ` to the ending `, including interpolations (whose tokens are not matched individually).
Matching template string interpolations requires recursive balancing of { and }—something that JavaScript regexes cannot do. Only one level of nesting is supported.
Consider this example:
A human can easily understand that in the number line we’re dealing with division, and in the regex line we’re dealing with a regex literal. How come? Because humans can look at the whole code to put the / characters in context. A JavaScript regex cannot. It only sees forwards. (Well, ES2018 regexes can also look backwards. See the ES2018 section).
When the jsTokens regex scans through the above, it will see the following at the end of both the number and regex rows:
It is then impossible to know if that is a regex literal, or part of an expression dealing with division.
Here is a similar case:
The first line divides the foo variable with 2/g. The second line calls the foo function with the regex literal /= 2/g. Again, since jsTokens only sees forwards, it cannot tell the two cases apart.
There are some cases where we can tell division and regex literals apart, though.
First off, we have the simple cases where there’s only one slash in the line:
Regex literals cannot contain newlines, so the above cases are correctly identified as division. Things are only problematic when there is more than one non-comment slash in a single line.
Secondly, not every character is a valid regex flag.
The above example is also correctly identified as division, because e is not a valid regex flag. I initially wanted to future-proof by allowing [a-zA-Z]* (any letter) as flags, but it is not worth it since it increases the amount of ambiguous cases. So only the standard g, m, i, y and u flags are allowed. This means that the above example will be identified as division as long as you don't rename the e variable to some permutation of gmiyus 1 to 6 characters long.
Lastly, we can look forward for information.
+, *, && and ==), but division could likely be part of such an expression.

Please consult the regex source and the test cases for precise information on when regex or division is matched (should you need to know). In short, you could sum it up as:
If the end of a statement looks like a regex literal (even if it isn’t), it will be treated as one. Otherwise it should work as expected (if you write sane code).
ES2018 added some nice regex improvements to the language.
These things would be nice to do, but are not critical. They probably have to wait until the oldest maintained Node.js LTS release supports those features.
Fill in a range of numbers or letters, optionally passing an increment or step to use, or create a regex-compatible range with options.toRegex.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Expands numbers and letters, optionally using a step as the last argument. (Numbers may be defined as JavaScript numbers or strings).
const fill = require('fill-range');
// fill(from, to[, step, options]);
console.log(fill('1', '10')); //=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10']
console.log(fill('1', '10', { toRegex: true })); //=> [1-9]|10

Params
- from: {String|Number} the number or letter to start with
- to: {String|Number} the number or letter to end with
- step: {String|Number|Object|Function} Optionally pass a step to use.
- options: {Object|Function}: See all available options

By default, an array of values is returned.
Alphabetical ranges
console.log(fill('a', 'e')); //=> ['a', 'b', 'c', 'd', 'e']
console.log(fill('A', 'E')); //=> [ 'A', 'B', 'C', 'D', 'E' ]

Numerical ranges
Numbers can be defined as actual numbers or strings.
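The numeric behavior can be sketched in plain JavaScript (fillSketch below is a hypothetical illustration of the expected output, not the library's implementation):

```javascript
// Minimal sketch of fill(from, to[, step]) for increasing numeric ranges.
function fillSketch(from, to, step = 1) {
  const out = [];
  for (let i = from; i <= to; i += step) out.push(i);
  return out;
}

console.log(fillSketch(1, 5));     // [ 1, 2, 3, 4, 5 ]
console.log(fillSketch(0, 25, 5)); // [ 0, 5, 10, 15, 20, 25 ]
```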
Negative ranges
Numbers can be defined as actual numbers or strings.
console.log(fill('-5', '-1')); //=> [ '-5', '-4', '-3', '-2', '-1' ]
console.log(fill('-5', '5')); //=> [ '-5', '-4', '-3', '-2', '-1', '0', '1', '2', '3', '4', '5' ]

Steps (increments)
// numerical ranges with increments
console.log(fill('0', '25', 4)); //=> [ '0', '4', '8', '12', '16', '20', '24' ]
console.log(fill('0', '25', 5)); //=> [ '0', '5', '10', '15', '20', '25' ]
console.log(fill('0', '25', 6)); //=> [ '0', '6', '12', '18', '24' ]
// alphabetical ranges with increments
console.log(fill('a', 'z', 4)); //=> [ 'a', 'e', 'i', 'm', 'q', 'u', 'y' ]
console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ]
console.log(fill('a', 'z', 6)); //=> [ 'a', 'g', 'm', 's', 'y' ]

Type: number (formatted as a string or number)
Default: undefined
Description: The increment to use for the range. Can be used with letters or numbers.
Example(s)
// numbers
console.log(fill('1', '10', 2)); //=> [ '1', '3', '5', '7', '9' ]
console.log(fill('1', '10', 3)); //=> [ '1', '4', '7', '10' ]
console.log(fill('1', '10', 4)); //=> [ '1', '5', '9' ]
// letters
console.log(fill('a', 'z', 5)); //=> [ 'a', 'f', 'k', 'p', 'u', 'z' ]
console.log(fill('a', 'z', 7)); //=> [ 'a', 'h', 'o', 'v' ]
console.log(fill('a', 'z', 9)); //=> [ 'a', 'j', 's' ]

Type: boolean
Default: false
Description: By default, null is returned when an invalid range is passed. Enable this option to throw a RangeError on invalid ranges.
Example(s)
The following are all invalid:
fill('1.1', '2'); // decimals not supported in ranges
fill('a', '2'); // incompatible range values
fill(1, 10, 'foo'); // invalid "step" argument

Type: boolean
Default: undefined
Description: Cast all returned values to strings. By default, integers are returned as numbers.
Example(s)
console.log(fill(1, 5)); //=> [ 1, 2, 3, 4, 5 ]
console.log(fill(1, 5, { stringify: true })); //=> [ '1', '2', '3', '4', '5' ]

Type: boolean
Default: undefined
Description: Create a regex-compatible source string, instead of expanding values to an array.
Example(s)
// alphabetical range
console.log(fill('a', 'e', { toRegex: true })); //=> '[a-e]'
// alphabetical with step
console.log(fill('a', 'z', 3, { toRegex: true })); //=> 'a|d|g|j|m|p|s|v|y'
// numerical range
console.log(fill('1', '100', { toRegex: true })); //=> '[1-9]|[1-9][0-9]|100'
// numerical range with zero padding
console.log(fill('000001', '100000', { toRegex: true }));
//=> '0{5}[1-9]|0{4}[1-9][0-9]|0{3}[1-9][0-9]{2}|0{2}[1-9][0-9]{3}|0[1-9][0-9]{4}|100000'

Type: function
Default: undefined
Description: Customize each value in the returned array (or string). (you can also pass this function as the last argument to fill()).
Example(s)
// add zero padding
console.log(fill(1, 5, value => String(value).padStart(4, '0')));
//=> ['0001', '0002', '0003', '0004', '0005']

Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
| Commits | Contributor |
|---|---|
| 116 | jonschlinkert |
| 4 | paulmillr |
| 2 | realityking |
| 2 | bluelovers |
| 1 | edorivai |
| 1 | wtgtybhertgeghgtwtg |
Jon Schlinkert
Please consider supporting me on Patreon, or start your own Patreon page!
This file was generated by verb-generate-readme, v0.8.0, on April 08, 2019.
BigNum in pure javascript
npm install --save bn.js
const BN = require('bn.js');
var a = new BN('dead', 16);
var b = new BN('101010', 2);
var res = a.add(b);
console.log(res.toString(10)); // 57047

Note: decimals are not supported in this library.
There are several prefixes to instructions that affect the way they work. Here is the list of them in the order of appearance in the function name:
- i - perform the operation in-place, storing the result in the host object (on which the method was invoked). Might be used to avoid number allocation costs
- u - unsigned, ignore the sign of operands when performing the operation, or always return a positive value. The second case applies to reduction operations like mod(). In such cases, if the result will be negative, the modulo will be added to the result to make it positive
- n - the argument of the function must be a plain JavaScript Number. Decimals are not supported.
- rn - both the argument and return value of the function are plain JavaScript Numbers. Decimals are not supported.

For example:

- a.iadd(b) - perform addition on a and b, storing the result in a
- a.umod(b) - reduce a modulo b, returning a positive value
- a.iushln(13) - shift bits of a left by 13

Prefixes/postfixes are put in parens at the end of the line. endian - could be either le (little-endian) or be (big-endian).
Utilities:

- a.clone() - clone number
- a.toString(base, length) - convert to base-string and pad with zeroes
- a.toNumber() - convert to JavaScript Number (limited to 53 bits)
- a.toJSON() - convert to JSON compatible hex string (alias of toString(16))
- a.toArray(endian, length) - convert to byte Array, and optionally zero pad to length, throwing if already exceeding
- a.toArrayLike(type, endian, length) - convert to an instance of type, which must behave like an Array
- a.toBuffer(endian, length) - convert to Node.js Buffer (if available). For compatibility with browserify and similar tools, use this instead: a.toArrayLike(Buffer, endian, length)
- a.bitLength() - get number of bits occupied
- a.zeroBits() - return number of less-significant consequent zero bits (example: 1010000 has 4 zero bits)
- a.byteLength() - return number of bytes occupied
- a.isNeg() - true if the number is negative
- a.isEven() - no comments
- a.isOdd() - no comments
- a.isZero() - no comments
- a.cmp(b) - compare numbers and return -1 (a < b), 0 (a == b), or 1 (a > b) depending on the comparison result (ucmp, cmpn)
- a.lt(b) - a less than b (n)
- a.lte(b) - a less than or equals b (n)
- a.gt(b) - a greater than b (n)
- a.gte(b) - a greater than or equals b (n)
- a.eq(b) - a equals b (n)
- a.toTwos(width) - convert to two’s complement representation, where width is the bit width
- a.fromTwos(width) - convert from two’s complement representation, where width is the bit width
- BN.isBN(object) - returns true if the supplied object is a BN.js instance
- BN.max(a, b) - return a if a is bigger than b
- BN.min(a, b) - return a if a is less than b

Arithmetic:

- a.neg() - negate sign (i)
- a.abs() - absolute value (i)
- a.add(b) - addition (i, n, in)
- a.sub(b) - subtraction (i, n, in)
- a.mul(b) - multiply (i, n, in)
- a.sqr() - square (i)
- a.pow(b) - raise a to the power of b
- a.div(b) - divide (divn, idivn)
- a.mod(b) - reduce (u, n) (but no umodn)
- a.divRound(b) - rounded division

Bit operations:

- a.or(b) - or (i, u, iu)
- a.and(b) - and (i, u, iu, andln) (NOTE: andln is going to be replaced with andn in future)
- a.xor(b) - xor (i, u, iu)
- a.setn(b) - set specified bit to 1
- a.shln(b) - shift left (i, u, iu)
- a.shrn(b) - shift right (i, u, iu)
- a.testn(b) - test if specified bit is set
- a.maskn(b) - clear bits with indexes higher or equal to b (i)
- a.bincn(b) - add 1 << b to the number
- a.notn(w) - not (for the width specified by w) (i)

Number theory:

- a.gcd(b) - GCD
- a.egcd(b) - Extended GCD results ({ a: ..., b: ..., gcd: ... })
- a.invm(b) - inverse of a modulo b

When doing lots of reductions using the same modulo, it might be beneficial to use some tricks: like Montgomery multiplication, or using a special algorithm for Mersenne Primes.
To enable these tricks, one should create a reduction context:
where num is just a BN instance.
Or:
Where primeName is either of these Mersenne Primes:
- 'k256'
- 'p224'
- 'p192'
- 'p25519'

Or:
To reduce numbers with the Montgomery trick. .mont() is generally faster than .red(num), but slower than BN.red(primeName).
Before performing anything in reduction context - numbers should be converted to it. Usually, this means that one should:
Here is how one may convert numbers to red:
Where red is a reduction context created using the instructions above.
Here is how to convert them back:
Most of the instructions from the very start of this readme have their counterparts in red context:
- a.redAdd(b), a.redIAdd(b)
- a.redSub(b), a.redISub(b)
- a.redShl(num)
- a.redMul(b), a.redIMul(b)
- a.redSqr(), a.redISqr()
- a.redSqrt() - square root modulo reduction context’s prime
- a.redInvm() - modular inverse of the number
- a.redNeg()
- a.redPow(b) - modular exponentiation

Optimized for elliptic curves that work with 256-bit numbers. There is no limitation on the size of the numbers.
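The arithmetic a reduction context performs can be illustrated with native BigInt (a sketch of the math being computed, not of bn.js internals or its API):

```javascript
// All red* operations compute modulo a fixed prime chosen up front;
// bn.js precomputes data for that prime so repeated reductions are cheap.
const p = 2n ** 255n - 19n; // the prime behind the 'p25519' context
const mod = (x) => ((x % p) + p) % p;

const redAdd = (a, b) => mod(a + b); // what a.redAdd(b) computes
const redMul = (a, b) => mod(a * b); // what a.redMul(b) computes
const redPow = (a, e) => {           // what a.redPow(b) computes,
  let result = 1n, base = mod(a);    // via square-and-multiply
  for (; e > 0n; e >>= 1n) {
    if (e & 1n) result = mod(result * base);
    base = mod(base * base);
  }
  return result;
};

console.log(redPow(2n, p - 1n) === 1n); // Fermat's little theorem: true
```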
Wrap native HTTP requests with RFC compliant cache support
RFC 7234 compliant HTTP caching for native Node.js HTTP/HTTPS requests. Caching works out of the box in memory or is easily pluggable with a wide range of storage adapters.
Note: This is a low level wrapper around the core HTTP modules, it’s not a high level request library.
- If-None-Match/If-Modified-Since headers
- Age header on cached responses

npm install cacheable-request
const http = require('http');
const CacheableRequest = require('cacheable-request');
// Then instead of
const req = http.request('http://example.com', cb);
req.end();
// You can do
const cacheableRequest = new CacheableRequest(http.request);
const cacheReq = cacheableRequest('http://example.com', cb);
cacheReq.on('request', req => req.end());
// Future requests to 'example.com' will be returned from cache if still valid
// You pass in any other http.request API compatible method to be wrapped with cache support:
const cacheableRequest = new CacheableRequest(https.request);
const cacheableRequest = new CacheableRequest(electron.net);

cacheable-request uses Keyv to support a wide range of storage adapters.
For example, to use Redis as a cache backend, you just need to install the official Redis Keyv storage adapter:
npm install @keyv/redis
And then you can pass CacheableRequest your connection string:
View all official Keyv storage adapters.
Keyv also supports anything that follows the Map API so it’s easy to write your own storage adapter or use a third-party solution.
e.g. the following are all valid storage adapters:
const storageAdapter = new Map();
// or
const storageAdapter = require('./my-storage-adapter');
// or
const QuickLRU = require('quick-lru');
const storageAdapter = new QuickLRU({ maxSize: 1000 });
const cacheableRequest = new CacheableRequest(http.request, storageAdapter);

View the Keyv docs for more information on how to use storage adapters.
Returns the provided request function wrapped with cache support.
Type: function
Request function to wrap with cache support. Should be http.request or a similar API compatible request function.
Type: Keyv storage adapter
Default: new Map()
A Keyv storage adapter instance, or connection string if using with an official Keyv storage adapter.
Returns an event emitter.
Type: object, string
http-cache-semantics options.

Type: boolean
Default: true
If the cache should be used. Setting this to false will completely bypass the cache for the current request.
Type: boolean
Default: false
If set to true, once a cached resource has expired, it is deleted and will have to be re-requested.
If set to false (default), after a cached resource’s TTL expires it is kept in the cache and will be revalidated on the next request with If-None-Match/If-Modified-Since headers.
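The revalidation step can be sketched as follows (revalidationHeaders is a hypothetical helper for illustration, not part of the cacheable-request API):

```javascript
// Build conditional headers from a cached response so the server can
// reply 304 Not Modified instead of resending the full body.
function revalidationHeaders(cachedResponse) {
  const headers = {};
  if (cachedResponse.headers.etag) {
    headers['if-none-match'] = cachedResponse.headers.etag;
  }
  if (cachedResponse.headers['last-modified']) {
    headers['if-modified-since'] = cachedResponse.headers['last-modified'];
  }
  return headers;
}

console.log(revalidationHeaders({ headers: { etag: '"abc123"' } }));
// { 'if-none-match': '"abc123"' }
```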
Type: number
Default: undefined
Limits TTL. The number represents milliseconds.
Type: boolean
Default: false
When set to true, if the DB connection fails we will automatically fall back to a network request. DB errors will still be emitted to notify you of the problem even though the request callback may succeed.
Type: boolean
Default: false
Forces refreshing the cache. If the response could be retrieved from the cache, it will perform a new request and override the cache instead.
Type: function
The callback function which will receive the response as an argument.
The response can be either a Node.js HTTP response stream or a responselike object. The response will also have a fromCache property set with a boolean value.
request event to get the request object of the request.
Note: This event will only fire if an HTTP request is actually made, not when a response is retrieved from cache. However, you should always handle the request event to end the request and handle any potential request errors.
response event to get the response object from the HTTP request or cache.
error event emitted in case of an error with the cache.
Errors emitted here will be an instance of CacheableRequest.RequestError or CacheableRequest.CacheError. You will only ever receive a RequestError if the request function throws (normally caused by invalid user input). Normal request errors should be handled inside the request event.
To properly handle all error scenarios you should use the following pattern:
cacheableRequest('example.com', cb)
.on('error', err => {
if (err instanceof CacheableRequest.CacheError) {
handleCacheError(err); // Cache error
} else if (err instanceof CacheableRequest.RequestError) {
handleRequestError(err); // Request function thrown
}
})
.on('request', req => {
req.on('error', handleRequestError); // Request error emitted
req.end();
});

Note: Database connection errors are emitted here; however, cacheable-request will attempt to re-request the resource and bypass the cache on a connection error. Therefore a database connection error doesn’t necessarily mean the request won’t be fulfilled.
Minimal async jobs utility library, with streams support.
AsyncKit provides a harness for parallel and serial iterators over lists of items represented by arrays or objects. Optionally it accepts an abort function (which should be synchronously returned by the iterator for each item), and terminates leftover jobs upon an error event. For specific iteration order, built-in (ascending and descending) and custom sort helpers are also supported, via the asynckit.serialOrdered method.
It runs iterators asynchronously to keep behavior stable and to prevent Maximum call stack size exceeded errors from synchronous iterators.
| compression | size |
|---|---|
| asynckit.js | 12.34 kB |
| asynckit.min.js | 4.11 kB |
| asynckit.min.js.gz | 1.47 kB |
Runs the iterator over the provided array in parallel. Stores output in the result array at the matching positions. In the unlikely event of an error from one of the jobs, it will terminate the rest of the active jobs (if an abort function is provided) and return the error along with salvaged data to the main callback function.
var parallel = require('asynckit').parallel
, assert = require('assert')
;
var source = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
, expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ]
, expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ]
, target = []
;
parallel(source, asyncJob, function(err, result)
{
assert.deepEqual(result, expectedResult);
assert.deepEqual(target, expectedTarget);
});
// async job accepts one element from the array
// and a callback function
function asyncJob(item, cb)
{
// different delays (in ms) per item
var delay = item * 25;
// pretend different jobs take different time to finish
// and not in consequential order
var timeoutId = setTimeout(function() {
target.push(item);
cb(null, item * 2);
}, delay);
// allow to cancel "leftover" jobs upon error
// return function, invoking of which will abort this job
return clearTimeout.bind(null, timeoutId);
}

More examples can be found in test/test-parallel-array.js.
It also supports named jobs, listed via an object.
var parallel = require('asynckit/parallel')
, assert = require('assert')
;
var source = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 }
, expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 }
, expectedTarget = [ 1, 1, 2, 4, 8, 16, 32, 64 ]
, expectedKeys = [ 'first', 'one', 'two', 'four', 'eight', 'sixteen', 'thirtyTwo', 'sixtyFour' ]
, target = []
, keys = []
;
parallel(source, asyncJob, function(err, result)
{
assert.deepEqual(result, expectedResult);
assert.deepEqual(target, expectedTarget);
assert.deepEqual(keys, expectedKeys);
});
// supports full value, key, callback (shortcut) interface
function asyncJob(item, key, cb)
{
// different delays (in ms) per item
var delay = item * 25;
// pretend different jobs take different time to finish
// and not in consequential order
var timeoutId = setTimeout(function() {
keys.push(key);
target.push(item);
cb(null, item * 2);
}, delay);
// allow to cancel "leftover" jobs upon error
// return function, invoking of which will abort this job
return clearTimeout.bind(null, timeoutId);
}

More examples can be found in test/test-parallel-object.js.
Runs the iterator over the provided array sequentially. Stores output in the result array at the matching positions. In the unlikely event of an error from one of the jobs, it will not proceed to the rest of the items in the list and will return the error along with salvaged data to the main callback function.
var serial = require('asynckit/serial')
, assert = require('assert')
;
var source = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
, expectedResult = [ 2, 2, 8, 32, 128, 64, 16, 4 ]
, expectedTarget = [ 0, 1, 2, 3, 4, 5, 6, 7 ]
, target = []
;
serial(source, asyncJob, function(err, result)
{
assert.deepEqual(result, expectedResult);
assert.deepEqual(target, expectedTarget);
});
// extended interface (item, key, callback)
// also supported for arrays
function asyncJob(item, key, cb)
{
target.push(key);
// it will be automatically made async
// even if the iterator "returns" in the same event loop
cb(null, item * 2);
}

More examples can be found in test/test-serial-array.js.
It also supports named jobs, listed via an object.
var serial = require('asynckit').serial
, assert = require('assert')
;
var source = { first: 1, one: 1, four: 4, sixteen: 16, sixtyFour: 64, thirtyTwo: 32, eight: 8, two: 2 }
, expectedResult = { first: 2, one: 2, four: 8, sixteen: 32, sixtyFour: 128, thirtyTwo: 64, eight: 16, two: 4 }
, expectedTarget = [ 1, 1, 4, 16, 64, 32, 8, 2 ]
, target = []
;
serial(source, asyncJob, function(err, result)
{
assert.deepEqual(result, expectedResult);
assert.deepEqual(target, expectedTarget);
});
// shortcut interface (item, callback)
// works for object as well as for the arrays
function asyncJob(item, cb)
{
target.push(item);
// it will be automatically made async
// even if the iterator "returns" in the same event loop
cb(null, item * 2);
}

More examples can be found in test/test-serial-object.js.
Note: Since an object is an unordered collection of properties, it may produce unexpected results with sequential iterations. Whenever the order of job execution is important, please use the serialOrdered method.
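The serialOrdered idea can be sketched like this (a simplified illustration of the semantics, not the library's implementation):

```javascript
// Iterate an object's values one at a time, in sorted key order.
function serialOrderedSketch(source, iterator, sortMethod, callback) {
  const keys = Object.keys(source).sort(sortMethod);
  const results = {};
  (function next(index) {
    if (index === keys.length) return callback(null, results);
    const key = keys[index];
    iterator(source[key], key, (err, value) => {
      if (err) return callback(err, results); // stop on first error
      results[key] = value;
      next(index + 1);
    });
  })(0);
}

serialOrderedSketch(
  { b: 2, a: 1 },
  (item, key, cb) => cb(null, item * 2),
  undefined, // default ascending key order
  (err, results) => console.log(results) // { a: 2, b: 4 }
);
```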
TBD
For example compare-property package.
TBD
More examples can be found in test folder.
Or open an issue with questions and/or suggestions.
A performant priority queue implementation using a Heap data structure.
There are two types of PriorityQueue in this repo: MinPriorityQueue, which uses a MinHeap and considers an element with a smaller priority number as higher in priority, and MaxPriorityQueue, which uses a MaxHeap and considers an element with a bigger priority number as higher in priority.
The constructor can accept a callback to get the priority from the queued element. If not passed, the priority should be passed with .enqueue.
// the priority not part of the enqueued element
const patientsQueue = new MinPriorityQueue();
// the priority is a prop of the queued element
const biddersQueue = new MaxPriorityQueue({ priority: (bid) => bid.value });

adds an element with a priority (number) to the queue. Priority is not required here if a priority callback has been defined in the constructor. If passed here in addition to an existing constructor callback, it will override the constructor callback.
| name | type |
|---|---|
| element | object |
| priority | number |
| runtime |
|---|
| O(log(n)) |
// MinPriorityQueue Example, where priority is the turn for example
patientsQueue.enqueue('patient y', 1); // highest priority
patientsQueue.enqueue('patient z', 3);
patientsQueue.enqueue('patient w', 4); // lowest priority
patientsQueue.enqueue('patient x', 2);
// MaxPriorityQueue Example, where priority is the bid for example. Priority is obtained from the callback.
biddersQueue.enqueue({ name: 'bidder y', value: 1000 }); // lowest priority
biddersQueue.enqueue({ name: 'bidder w', value: 2500 });
biddersQueue.enqueue({ name: 'bidder z', value: 3500 }); // highest priority
biddersQueue.enqueue({ name: 'bidder x', value: 3000 });

returns the element with the highest priority in the queue.
| return | description |
|---|---|
| object | object literal with “priority” and “element” props |
| runtime |
|---|
| O(1) |
console.log(patientsQueue.front()); // { priority: 1, element: 'patient y' }
console.log(biddersQueue.front()); // { priority: 3500, element: { name: 'bidder z', value: 3500 } }

returns the element with the lowest priority in the queue. If multiple elements exist at the lowest priority, the one that was inserted first will be returned.
| return | description |
|---|---|
| object | object literal with “priority” and “element” props |
| runtime |
|---|
| O(1) |
patientsQueue.enqueue('patient m', 4); // lowest priority
patientsQueue.enqueue('patient c', 4); // lowest priority
console.log(patientsQueue.back()); // { priority: 4, element: 'patient w' }
biddersQueue.enqueue({ name: 'bidder m', value: 1000 }); // lowest priority
biddersQueue.enqueue({ name: 'bidder c', value: 1000 }); // lowest priority
console.log(biddersQueue.back()); // { priority: 1000, element: { name: 'bidder y', value: 1000 } }

removes and returns the element with the highest priority in the queue.
| return | description |
|---|---|
| object | object literal with “priority” and “element” props |
| runtime |
|---|
| O(log(n)) |
console.log(patientsQueue.dequeue()); // { priority: 1, element: 'patient y' }
console.log(patientsQueue.front()); // { priority: 2, element: 'patient x' }
console.log(biddersQueue.dequeue()); // { priority: 3500, element: { name: 'bidder z', value: 3500 } }
console.log(biddersQueue.front()); // { priority: 3000, element: { name: 'bidder x', value: 3000 } }

checks if the queue is empty.
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
returns the number of elements in the queue.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
returns a sorted array of elements by their priorities from highest to lowest.
| return | description |
|---|---|
| array | an array of object literals with “priority” & “element” props |
| runtime |
|---|
| O(n*log(n)) |
console.log(patientsQueue.toArray());
/*
[
{ priority: 2, element: 'patient x' },
{ priority: 3, element: 'patient z' },
{ priority: 4, element: 'patient c' },
{ priority: 4, element: 'patient w' },
{ priority: 4, element: 'patient m' }
]
*/
console.log(biddersQueue.toArray());
/*
[
{ priority: 3000, element: { name: 'bidder x', value: 3000 } },
{ priority: 2500, element: { name: 'bidder w', value: 2500 } },
{ priority: 1000, element: { name: 'bidder y', value: 1000 } },
{ priority: 1000, element: { name: 'bidder m', value: 1000 } },
{ priority: 1000, element: { name: 'bidder c', value: 1000 } }
]
*/

clears all elements in the queue.
| runtime |
|---|
| O(1) |
patientsQueue.clear();
console.log(patientsQueue.size()); // 0
console.log(patientsQueue.front()); // null
console.log(patientsQueue.dequeue()); // null
biddersQueue.clear();
console.log(biddersQueue.size()); // 0
console.log(biddersQueue.front()); // null
console.log(biddersQueue.dequeue()); // null

grunt build
The JSON5 Data Interchange Format (JSON5) is a superset of JSON that aims to alleviate some of the limitations of JSON by expanding its syntax to include some productions from ECMAScript 5.1.
This JavaScript library is the official reference implementation for JSON5 parsing and serialization libraries.
The following ECMAScript 5.1 features, which are not supported in JSON, have been extended to JSON5.
{
// comments
unquoted: 'and you can quote me on that',
singleQuotes: 'I can use "double quotes" here',
lineBreaks: "Look, Mom! \
No \\n's!",
hexadecimal: 0xdecaf,
leadingDecimalPoint: .8675309, andTrailing: 8675309.,
positiveSign: +1,
trailingComma: 'in objects', andIn: ['arrays',],
"backwardsCompatible": "with JSON",
}

For a detailed explanation of the JSON5 format, please read the official specification.
This will create a global JSON5 variable.
The JSON5 API is compatible with the JSON API.
Parses a JSON5 string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.
JSON5.parse(text[, reviver])
- text: The string to parse as JSON5.
- reviver: If a function, this prescribes how the value originally produced by parsing is transformed, before being returned.

Returns the object corresponding to the given JSON5 text.
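Since the API mirrors JSON.parse, the reviver contract can be seen with the built-in JSON object (a JSON example standing in for JSON5):

```javascript
// A reviver visits every key/value pair; here it doubles all numbers.
const doubled = JSON.parse('{"n": 21, "s": "x"}', (key, value) =>
  typeof value === 'number' ? value * 2 : value
);
console.log(doubled); // { n: 42, s: 'x' }
```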
Converts a JavaScript value to a JSON5 string, optionally replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.
JSON5.stringify(value[, replacer[, space]])
JSON5.stringify(value[, options])
- value: The value to convert to a JSON5 string.
- replacer: A function that alters the behavior of the stringification process, or an array of String and Number objects that serve as a whitelist for selecting/filtering the properties of the value object to be included in the JSON5 string. If this value is null or not provided, all properties of the object are included in the resulting JSON5 string.
- space: A String or Number object that’s used to insert white space into the output JSON5 string for readability purposes. If this is a Number, it indicates the number of space characters to use as white space; this number is capped at 10 (if it is greater, the value is just 10). Values less than 1 indicate that no space should be used. If this is a String, the string (or the first 10 characters of the string, if it’s longer than that) is used as white space. If this parameter is not provided (or is null), no white space is used. If white space is used, trailing commas will be used in objects and arrays.
- options: An object with the following properties:
  - replacer: Same as the replacer parameter.
  - space: Same as the space parameter.
  - quote: A String representing the quote character to use when serializing strings.

Returns a JSON5 string representing the value.
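Again the semantics match JSON.stringify, so the replacer-array and space behavior can be demonstrated with the built-in API:

```javascript
const value = { a: 1, b: 2, secret: 'hidden' };

// An array replacer acts as a property whitelist.
console.log(JSON.stringify(value, ['a', 'b'])); // {"a":1,"b":2}

// A numeric space indents nested output.
console.log(JSON.stringify({ a: 1 }, null, 2));
// {
//   "a": 1
// }
```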
require() JSON5 files

When using Node.js, you can require() JSON5 files by adding the following statement.
Then you can load a JSON5 file with a Node.js require() statement. For example:
Since JSON is more widely used than JSON5, this package includes a CLI for converting JSON5 to JSON and for validating the syntax of JSON5 documents.
If <file> is not provided, then STDIN is used.
- -s, --space: The number of spaces to indent or t for tabs
- -o, --out-file [file]: Output to the specified file, otherwise STDOUT
- -v, --validate: Validate JSON5 but do not output JSON
- -V, --version: Output the version number
- -h, --help: Output usage information

When contributing code, please write relevant tests and run npm test and npm run lint before submitting pull requests. Please use an editor that supports EditorConfig.
To report bugs or request features regarding the JSON5 data format, please submit an issue to the official specification repository.
To report bugs or request features regarding the JavaScript implementation of JSON5, please submit an issue to this repository.
Assem Kishore founded this project.
Michael Bolin independently arrived at and published some of these same ideas with awesome explanations and detail. Recommended reading: Suggested Improvements to JSON
Douglas Crockford of course designed and built JSON, but his state machine diagrams on the JSON website, as cheesy as it may sound, gave us motivation and confidence that building a new parser to implement these ideas was within reach! The original implementation of JSON5 was also modeled directly off of Doug’s open-source json_parse.js parser. We’re grateful for that clean and well-documented code.
Max Nanasy has been an early and prolific supporter, contributing multiple patches and ideas.
Andrew Eisenberg contributed the original stringify method.
Jordan Tucker has aligned JSON5 more closely with ES5, wrote the official JSON5 specification, completely rewrote the codebase from the ground up, and is actively maintaining this project.
The JSON5 Data Interchange Format (JSON5) is a superset of JSON that aims to alleviate some of the limitations of JSON by expanding its syntax to include some productions from ECMAScript 5.1.
This JavaScript library is the official reference implementation for JSON5 parsing and serialization libraries.
The following ECMAScript 5.1 features, which are not supported in JSON, have been extended to JSON5.
{
// comments
unquoted: 'and you can quote me on that',
singleQuotes: 'I can use "double quotes" here',
lineBreaks: "Look, Mom! \
No \\n's!",
hexadecimal: 0xdecaf,
leadingDecimalPoint: .8675309, andTrailing: 8675309.,
positiveSign: +1,
trailingComma: 'in objects', andIn: ['arrays',],
"backwardsCompatible": "with JSON",
}For a detailed explanation of the JSON5 format, please read the official specification.
This will create a global JSON5 variable.
The JSON5 API is compatible with the JSON API.
Parses a JSON5 string, constructing the JavaScript value or object described by the string. An optional reviver function can be provided to perform a transformation on the resulting object before it is returned.
JSON5.parse(text[, reviver])
text: The string to parse as JSON5.reviver: If a function, this prescribes how the value originally produced by parsing is transformed, before being returned.The object corresponding to the given JSON5 text.
Converts a JavaScript value to a JSON5 string, optionally replacing values if a replacer function is specified, or optionally including only the specified properties if a replacer array is specified.
JSON5.stringify(value[, replacer[, space]]) JSON5.stringify(value[, options])
value: The value to convert to a JSON5 string.
replacer: A function that alters the behavior of the stringification process, or an array of String and Number objects that serve as a whitelist for selecting/filtering the properties of the value object to be included in the JSON5 string. If this value is null or not provided, all properties of the object are included in the resulting JSON5 string.
space: A String or Number object that’s used to insert white space into the output JSON5 string for readability purposes. If this is a Number, it indicates the number of space characters to use as white space; this number is capped at 10 (if it is greater, the value is just 10). Values less than 1 indicate that no space should be used. If this is a String, the string (or the first 10 characters of the string, if it’s longer than that) is used as white space. If this parameter is not provided (or is null), no white space is used. If white space is used, trailing commas will be used in objects and arrays.
options: An object with the following properties:
replacer: Same as the replacer parameter.
space: Same as the space parameter.
quote: A String representing the quote character to use when serializing strings.
Returns: A JSON5 string representing the value.
require() JSON5 files
When using Node.js, you can require() JSON5 files by adding the following statement.
Then you can load a JSON5 file with a Node.js require() statement. For example:
Since JSON is more widely used than JSON5, this package includes a CLI for converting JSON5 to JSON and for validating the syntax of JSON5 documents.
If <file> is not provided, then STDIN is used.
-s, --space: The number of spaces to indent or t for tabs
-o, --out-file [file]: Output to the specified file, otherwise STDOUT
-v, --validate: Validate JSON5 but do not output JSON
-V, --version: Output the version number
-h, --help: Output usage information
When contributing code, please write relevant tests and run npm test and npm run lint before submitting pull requests. Please use an editor that supports EditorConfig.
To report bugs or request features regarding the JSON5 data format, please submit an issue to the official specification repository.
To report bugs or request features regarding the JavaScript implementation of JSON5, please submit an issue to this repository.
Assem Kishore founded this project.
Michael Bolin independently arrived at and published some of these same ideas with awesome explanations and detail. Recommended reading: Suggested Improvements to JSON
Douglas Crockford of course designed and built JSON, but his state machine diagrams on the JSON website, as cheesy as it may sound, gave us motivation and confidence that building a new parser to implement these ideas was within reach! The original implementation of JSON5 was also modeled directly off of Doug’s open-source json_parse.js parser. We’re grateful for that clean and well-documented code.
Max Nanasy has been an early and prolific supporter, contributing multiple patches and ideas.
Andrew Eisenberg contributed the original stringify method.
Jordan Tucker has aligned JSON5 more closely with ES5, wrote the official JSON5 specification, completely rewrote the codebase from the ground up, and is actively maintaining this project.
Split a string on a character except when the character is escaped.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Why use this?
Although it’s easy to split on a string:
It’s more challenging to split a string whilst respecting escaped or quoted characters.
Bad
console.log('a\\.b.c'.split('.'));
//=> ['a\\', 'b', 'c']
console.log('"a.b.c".d'.split('.'));
//=> ['"a', 'b', 'c"', 'd']
Good
var split = require('split-string');
console.log(split('a\\.b.c'));
//=> ['a.b', 'c']
console.log(split('"a.b.c".d'));
//=> ['a.b.c', 'd']
See the options to learn how to choose the separator or retain quotes or escaping.
var split = require('split-string');
split('a.b.c');
//=> ['a', 'b', 'c']
// respects escaped characters
split('a.b.c\\.d');
//=> ['a', 'b', 'c.d']
// respects double-quoted strings
split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']
Brackets
Also respects brackets unless disabled:
Type: object|boolean
Default: undefined
Description
If enabled, split-string will not split inside brackets. The following bracket types are supported when options.brackets is true:
Alternatively, an object of brackets may be passed, where each property key is the opening delimiter and the property value is the closing delimiter.
Examples
// no bracket support by default
split('a.{b.c}');
//=> [ 'a', '{b', 'c}' ]
// support all basic bracket types: "<>{}[]()"
split('a.{b.c}', {brackets: true});
//=> [ 'a', '{b.c}' ]
// also supports nested brackets
split('a.{b.{c.d}.e}.f', {brackets: true});
//=> [ 'a', '{b.{c.d}.e}', 'f' ]
// support only the specified brackets
split('[a.b].(c.d)', {brackets: {'[': ']'}});
//=> [ '[a.b]', '(c', 'd)' ]
Type: string
Default: .
The separator/character to split on.
Example
split('a.b,c', {sep: ','});
//=> ['a.b', 'c']
// you can also pass the separator as a string as the last argument
split('a.b,c', ',');
//=> ['a.b', 'c']
Type: boolean
Default: undefined
Keep backslashes in the result.
Example
Type: boolean
Default: undefined
Keep single- or double-quotes in the result.
Example
split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']
split('a."b.c.d".e', {keepQuotes: true});
//=> ['a', '"b.c.d"', 'e']
split('a.\'b.c.d\'.e', {keepQuotes: true});
//=> ['a', '\'b.c.d\'', 'e']
Type: boolean
Default: undefined
Keep double-quotes in the result.
Example
split('a."b.c.d".e');
//=> ['a', 'b.c.d', 'e']
split('a."b.c.d".e', {keepDoubleQuotes: true});
//=> ['a', '"b.c.d"', 'e']
Type: boolean
Default: undefined
Keep single-quotes in the result.
Example
split('a.\'b.c.d\'.e');
//=> ['a', 'b.c.d', 'e']
split('a.\'b.c.d\'.e', {keepSingleQuotes: true});
//=> ['a', '\'b.c.d\'', 'e']
Type: function
Default: undefined
Pass a function as the last argument to customize how tokens are added to the array.
Example
var arr = split('a.b', function(tok) {
if (tok.arr[tok.arr.length - 1] === 'a') {
tok.split = false;
}
});
console.log(arr);
//=> ['a.b']
Properties
The tok object has the following properties:
tok.val (string) The current value about to be pushed onto the result array
tok.idx (number) the current index in the string
tok.str (string) the entire string
tok.arr (array) the result array
Added
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 28 | jonschlinkert |
| 9 | doowb |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on November 19, 2017.
serve-static
This is a Node.js module available through the npm registry. Installation is done using the npm install command:
Create a new middleware function to serve files from within a given root directory. The file to serve will be determined by combining req.url with the provided root directory. When a file is not found, instead of sending a 404 response, this module will instead call next() to move on to the next middleware, allowing for stacking and fall-backs.
Enable or disable accepting ranged requests, defaults to true. Disabling this will not send Accept-Ranges and ignore the contents of the Range request header.
Enable or disable setting Cache-Control response header, defaults to true. Disabling this will ignore the immutable and maxAge options.
Set how “dotfiles” are treated when encountered. A dotfile is a file or directory that begins with a dot (“.”). Note this check is done on the path itself without checking if the path actually exists on the disk. If root is specified, only the dotfiles above the root are checked (i.e. the root itself can be within a dotfile when set to “deny”).
'allow' No special treatment for dotfiles.
'deny' Deny a request for a dotfile and 403/next().
'ignore' Pretend like the dotfile does not exist and 404/next().
The default value is similar to 'ignore', with the exception that this default will not ignore the files within a directory that begins with a dot.
Enable or disable etag generation, defaults to true.
Set file extension fallbacks. When set, if a file is not found, the given extensions will be added to the file name and searched for. The first that exists will be served. Example: ['html', 'htm'].
The default value is false.
Set the middleware to have client errors fall-through as just unhandled requests, otherwise forward a client error. The difference is that client errors like a bad request or a request to a non-existent file will cause this middleware to simply next() to your next middleware when this value is true. When this value is false, these errors (even 404s), will invoke next(err).
Typically true is desired such that multiple physical directories can be mapped to the same web address or for routes to fill in non-existent files.
The value false can be used if this middleware is mounted at a path that is designed to be strictly a single file system directory, which allows for short-circuiting 404s for less overhead. This middleware will also reply to all methods.
The default value is true.
Enable or disable the immutable directive in the Cache-Control response header, defaults to false. If set to true, the maxAge option should also be specified to enable caching. The immutable directive will prevent supported clients from making conditional requests during the life of the maxAge option to check if the file has changed.
By default this module will send “index.html” files in response to a request on a directory. To disable this set false or to supply a new index pass a string or an array in preferred order.
Enable or disable Last-Modified header, defaults to true. Uses the file system’s last modified value.
Provide a max-age in milliseconds for http caching, defaults to 0. This can also be a string accepted by the ms module.
Redirect to trailing “/” when the pathname is a dir. Defaults to true.
Function to set custom headers on response. Alterations to the headers need to occur synchronously. The function is called as fn(res, path, stat), where the arguments are:
res the response object
path the file path that is being sent
stat the stat object of the file that is being sent
var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')
// Serve up public/ftp folder
var serve = serveStatic('public/ftp', { 'index': ['index.html', 'index.htm'] })
// Create server
var server = http.createServer(function onRequest (req, res) {
serve(req, res, finalhandler(req, res))
})
// Listen
server.listen(3000)
var contentDisposition = require('content-disposition')
var finalhandler = require('finalhandler')
var http = require('http')
var serveStatic = require('serve-static')
// Serve up public/ftp folder
var serve = serveStatic('public/ftp', {
'index': false,
'setHeaders': setHeaders
})
// Set header to force download
function setHeaders (res, path) {
res.setHeader('Content-Disposition', contentDisposition(path))
}
// Create server
var server = http.createServer(function onRequest (req, res) {
serve(req, res, finalhandler(req, res))
})
// Listen
server.listen(3000)
This is a simple example of using Express.
var express = require('express')
var serveStatic = require('serve-static')
var app = express()
app.use(serveStatic('public/ftp', { 'index': ['default.html', 'default.htm'] }))
app.listen(3000)
This example shows a simple way to search through multiple directories. Files are looked for in public-optimized/ first, then public/ second as a fallback.
var express = require('express')
var path = require('path')
var serveStatic = require('serve-static')
var app = express()
app.use(serveStatic(path.join(__dirname, 'public-optimized')))
app.use(serveStatic(path.join(__dirname, 'public')))
app.listen(3000)
This example shows how to set a different max age depending on the served file type. In this example, HTML files are not cached, while everything else is cached for 1 day.
var express = require('express')
var path = require('path')
var serveStatic = require('serve-static')
var app = express()
app.use(serveStatic(path.join(__dirname, 'public'), {
maxAge: '1d',
setHeaders: setCustomCacheControl
}))
app.listen(3000)
function setCustomCacheControl (res, path) {
if (serveStatic.mime.lookup(path) === 'text/html') {
// Custom Cache-Control for HTML files
res.setHeader('Cache-Control', 'public, max-age=0')
}
// }
Utils for working with JavaScript classes and prototype methods.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Returns true if an array has any of the given elements, or an object has any of the given keys.
Params
obj {Object}
val {String|Array}
returns {Boolean}
Example
cu.has(['a', 'b', 'c'], 'c');
//=> true
cu.has(['a', 'b', 'c'], ['c', 'z']);
//=> true
cu.has({a: 'b', c: 'd'}, ['c', 'z']);
//=> true
Returns true if an array or object has all of the given values.
Params
val {Object|Array}
values {String|Array}
returns {Boolean}
Example
cu.hasAll(['a', 'b', 'c'], 'c');
//=> true
cu.hasAll(['a', 'b', 'c'], ['c', 'z']);
//=> false
cu.hasAll({a: 'b', c: 'd'}, ['c', 'z']);
//=> false
Cast the given value to an array.
Params
val {String|Array}
returns {Array}
Example
Returns true if a value has a constructor
Params
value {Object}
returns {Boolean}
Example
Get the native ownPropertyNames from the constructor of the given object. An empty array is returned if the object does not have a constructor.
Params
obj {Object}: Object that has a constructor.
returns {Array}: Array of keys.
Example
cu.nativeKeys({a: 'b', b: 'c', c: 'd'})
//=> ['a', 'b', 'c']
cu.nativeKeys(function(){})
//=> ['length', 'caller']
Returns property descriptor key if it’s an “own” property of the given object.
Params
obj {Object}
key {String}
returns {Object}: Returns descriptor key
Example
function App() {}
Object.defineProperty(App.prototype, 'count', {
get: function() {
return Object.keys(this).length;
}
});
cu.getDescriptor(App.prototype, 'count');
// returns:
// {
// get: [Function],
// set: undefined,
// enumerable: false,
// configurable: false
// }
Copy a descriptor from one object to another.
Params
receiver {Object}
provider {Object}
name {String}
returns {Object}
Example
function App() {}
Object.defineProperty(App.prototype, 'count', {
get: function() {
return Object.keys(this).length;
}
});
var obj = {};
cu.copyDescriptor(obj, App.prototype, 'count');
Copy static properties, prototype properties, and descriptors from one object to another.
Params
receiver {Object}
provider {Object}
omit {String|Array}: One or more properties to omit
returns {Object}
Inherit the static properties, prototype properties, and descriptors of an object.
Params
receiver {Object}
provider {Object}
omit {String|Array}: One or more properties to omit
returns {Object}
Returns a function for extending the static properties, prototype properties, and descriptors from the Parent constructor onto Child constructors.
Params
Parent {Function}: Parent ctor
extend {Function}: Optional extend function to handle custom extensions. Useful when updating methods that require a specific prototype.
Child {Function}: Child ctor
proto {Object}: Optionally pass additional prototype properties to inherit.
returns {Object}
Example
var extend = cu.extend(Parent);
Parent.extend(Child);
// optional methods
Parent.extend(Child, {
foo: function() {},
bar: function() {}
});
Bubble up events emitted from static methods on the Parent ctor.
Params
Parent {Object}
events {Array}: Event names to bubble up
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 34 | jonschlinkert |
| 8 | doowb |
| 2 | wtgtybhertgeghgtwtg |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on January 11, 2018.
Simple, fast generation of RFC4122 UUIDS.
Features:
[Deprecation warning: The use of require('uuid') is deprecated and will not be supported after version 3.x of this module. Instead, use require('uuid/[v1|v3|v4|v5]') as shown in the examples below.]
npm install uuid
Then generate your uuid version of choice …
Version 1 (timestamp):
Version 3 (namespace):
const uuidv3 = require('uuid/v3');
// ... using predefined DNS namespace (for domain names)
uuidv3('hello.example.com', uuidv3.DNS); // ⇨ '9125a8dc-52ee-365b-a5aa-81b0b3681cf6'
// ... using predefined URL namespace (for, well, URLs)
uuidv3('http://example.com/hello', uuidv3.URL); // ⇨ 'c6235813-3ba4-3801-ae84-e0a6ebb7d138'
// ... using a custom namespace
//
// Note: Custom namespaces should be a UUID string specific to your application!
// E.g. the one here was generated using this modules `uuid` CLI.
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv3('Hello, World!', MY_NAMESPACE); // ⇨ 'e8b5a51d-11c8-3310-a6ab-367563f20686'
Version 4 (random):
Version 5 (namespace):
const uuidv5 = require('uuid/v5');
// ... using predefined DNS namespace (for domain names)
uuidv5('hello.example.com', uuidv5.DNS); // ⇨ 'fdda765f-fc57-5604-a269-52a7df8164ec'
// ... using predefined URL namespace (for, well, URLs)
uuidv5('http://example.com/hello', uuidv5.URL); // ⇨ '3bbcee75-cecc-5b56-8031-b6641c1ed1f1'
// ... using a custom namespace
//
// Note: Custom namespaces should be a UUID string specific to your application!
// E.g. the one here was generated using this modules `uuid` CLI.
const MY_NAMESPACE = '1b671a64-40d5-491e-99b0-da01ff1f3341';
uuidv5('Hello, World!', MY_NAMESPACE); // ⇨ '630eb68f-e0fa-5ecc-887a-7c7a62614681'
const uuidv1 = require('uuid/v1');
// Incantations
uuidv1();
uuidv1(options);
uuidv1(options, buffer, offset);
Generate and return a RFC4122 v1 (timestamp-based) UUID.
options - (Object) Optional uuid state to apply. Properties may include:
node - (Array) Node id as Array of 6 bytes (per 4.1.6). Default: Randomly generated ID. See note 1.
clockseq - (Number between 0 - 0x3fff) RFC clock sequence. Default: An internally maintained clockseq is used.
msecs - (Number) Time in milliseconds since unix Epoch. Default: The current time is used.
nsecs - (Number between 0-9999) additional time, in 100-nanosecond units. Ignored if msecs is unspecified. Default: internal uuid counter is used, as per 4.2.1.2.
buffer - (Array | Buffer) Array or buffer where UUID bytes are to be written.
offset - (Number) Starting index in buffer at which to begin writing.
Returns buffer, if specified, otherwise the string form of the UUID
Note: The default node id (the last 12 digits in the UUID) is generated once, randomly, on process startup, and then remains unchanged for the duration of the process.
Example: Generate string UUID with fully-specified options
const v1options = {
node: [0x01, 0x23, 0x45, 0x67, 0x89, 0xab],
clockseq: 0x1234,
msecs: new Date('2011-11-01').getTime(),
nsecs: 5678
};
uuidv1(v1options); // ⇨ '710b962e-041c-11e1-9234-0123456789ab'
Example: In-place generation of two binary IDs
// Generate two ids in an array
const arr = new Array();
uuidv1(null, arr, 0); // ⇨
// [
// 44, 94, 164, 192, 64, 103,
// 17, 233, 146, 52, 155, 29,
// 235, 77, 59, 125
// ]
uuidv1(null, arr, 16); // ⇨
// [
// 44, 94, 164, 192, 64, 103, 17, 233,
// 146, 52, 155, 29, 235, 77, 59, 125,
// 44, 94, 164, 193, 64, 103, 17, 233,
// 146, 52, 155, 29, 235, 77, 59, 125
// ]
const uuidv3 = require('uuid/v3');
// Incantations
uuidv3(name, namespace);
uuidv3(name, namespace, buffer);
uuidv3(name, namespace, buffer, offset);
Generate and return a RFC4122 v3 UUID.
name - (String | Array) “name” to create UUID with
namespace - (String | Array) “namespace” UUID either as a String or Array[16] of byte values
buffer - (Array | Buffer) Array or buffer where UUID bytes are to be written.
offset - (Number) Starting index in buffer at which to begin writing. Default = 0
Returns buffer, if specified, otherwise the string form of the UUID
Example:
const uuidv4 = require('uuid/v4')
// Incantations
uuidv4();
uuidv4(options);
uuidv4(options, buffer, offset);
Generate and return a RFC4122 v4 UUID.
options - (Object) Optional uuid state to apply. Properties may include:
random - (Number[16]) Array of 16 numbers (0-255) to use in place of randomly generated values
rng - (Function) Random # generator function that returns an Array[16] of byte values (0-255)
buffer - (Array | Buffer) Array or buffer where UUID bytes are to be written.
offset - (Number) Starting index in buffer at which to begin writing.
Returns buffer, if specified, otherwise the string form of the UUID
Example: Generate string UUID with predefined random values
const v4options = {
random: [
0x10, 0x91, 0x56, 0xbe, 0xc4, 0xfb, 0xc1, 0xea,
0x71, 0xb4, 0xef, 0xe1, 0x67, 0x1c, 0x58, 0x36
]
};
uuidv4(v4options); // ⇨ '109156be-c4fb-41ea-b1b4-efe1671c5836'
Example: Generate two IDs in a single buffer
const buffer = new Array();
uuidv4(null, buffer, 0); // ⇨
// [
// 155, 29, 235, 77, 59,
// 125, 75, 173, 155, 221,
// 43, 13, 123, 61, 203,
// 109
// ]
uuidv4(null, buffer, 16); // ⇨
// [
// 155, 29, 235, 77, 59, 125, 75, 173,
// 155, 221, 43, 13, 123, 61, 203, 109,
// 27, 157, 107, 205, 187, 253, 75, 45,
// 155, 93, 171, 141, 251, 189, 75, 237
// ]
const uuidv5 = require('uuid/v5');
// Incantations
uuidv5(name, namespace);
uuidv5(name, namespace, buffer);
uuidv5(name, namespace, buffer, offset);
Generate and return a RFC4122 v5 UUID.
name - (String | Array) “name” to create UUID with
namespace - (String | Array) “namespace” UUID either as a String or Array[16] of byte values
buffer - (Array | Buffer) Array or buffer where UUID bytes are to be written.
offset - (Number) Starting index in buffer at which to begin writing. Default = 0
Returns buffer, if specified, otherwise the string form of the UUID
Example:
UUIDs can be generated from the command line with the uuid command.
$ uuid
ddeb27fb-d9a0-4624-be4d-4615062daed4
$ uuid v1
02d37060-d446-11e7-a9fa-7bdae751ebe1
Type uuid --help for usage details
npm test
Markdown generated from README_js.md
cache-base
Basic object cache with get, set, del, and has methods for node.js/javascript projects.
Install
Install with npm:
npm install --save cache-base
Usage
var Cache = require('cache-base');
// instantiate
var app = new Cache();
// set values
app.set('a', 'b');
app.set('c.d', 'e');
// get values
app.get('a');
//=> 'b'
app.get('c');
//=> {d: 'e'}
console.log(app.cache);
//=> {a: 'b'}
Inherit
var util = require('util');
var Cache = require('cache-base');
function MyApp() {
  Cache.call(this);
}
util.inherits(MyApp, Cache);
var app = new MyApp();
app.set('a', 'b');
app.get('a');
//=> 'b'
Namespace
Define a custom property for storing values.
var Cache = require('cache-base').namespace('data');
var app = new Cache();
app.set('a', 'b');
console.log(app.data);
//=> {a: 'b'}
API
namespace
Create a Cache constructor that when instantiated will store values on the given prop.
Params
prop {String}: The property name to use for storing values.
returns {Function}: Returns a custom Cache constructor
Example
var Cache = require('cache-base').namespace('data');
var cache = new Cache();
cache.set('foo', 'bar');
//=> {data: {foo: 'bar'}}
Cache
Create a new Cache. Internally the Cache constructor is created using the namespace function, with cache defined as the storage object.
Params
cache {Object}: Optionally pass an object to initialize with.
Example
var app = new Cache();
.set
Assign value to key. Also emits set with the key and value.
Params
key {String}
value {any}
returns {Object}: Returns the instance for chaining.
Events
emits: set with key and value as arguments.
Example
app.on('set', function(key, val) {
  // do something when `set` is emitted
});
app.set(key, value);
// also takes an object or array
app.set({name: 'Halle'});
app.set([{foo: 'bar'}, {baz: 'quux'}]);
console.log(app);
//=> {name: 'Halle', foo: 'bar', baz: 'quux'}
.union
Union array to key. Also emits set with the key and value.
Params
key {String}
value {any}
returns {Object}: Returns the instance for chaining.
Example
app.union('a.b', ['foo']);
app.union('a.b', ['bar']);
console.log(app.get('a'));
//=> {b: ['foo', 'bar']}
.get
Return the value of key. Dot notation may be used to get nested property values.
Params
key {String}: The name of the property to get. Dot-notation may be used.
returns {any}: Returns the value of key
Events
emits: get with key and value as arguments.
Example
app.set('a.b.c', 'd');
app.get('a.b');
//=> {c: 'd'}
app.get(['a', 'b']);
//=> {c: 'd'}
.has
Return true if app has a stored value for key, false only if value is undefined.
Params
key {String}
returns {Boolean}
Events
emits: has with key and true or false as arguments.
Example
app.set('foo', 'bar');
app.has('foo');
//=> true
.del
Delete one or more properties from the instance.
Params
key {String|Array}: Property name or array of property names.
returns {Object}: Returns the instance for chaining.
Events
emits: del with the key as the only argument.
Example
app.del(); // delete all
// or
app.del('foo');
// or
app.del(['foo', 'bar']);
.clear
Reset the entire cache to an empty object.
Example
app.clear();
.visit
Visit method over the properties in the given object, or map visit over the object-elements in an array.
Params
method {String}: The name of the base method to call.
val {Object|Array}: The object or array to iterate over.
returns {Object}: Returns the instance for chaining.
About
Related projects
base-methods: base-methods is the foundation for creating modular, unit testable and highly pluggable node.js applications, starting… more | homepage
get-value: Use property paths (a.b.c) to get a nested value from an object. | homepage
has-value: Returns true if a value exists, false if empty. Works with deeply nested values using… more | homepage
option-cache: Simple API for managing options in JavaScript applications. | homepage
set-value: Create nested values and any intermediaries using dot notation ('a.b.c') paths. | homepage
unset-value: Delete nested properties from an object using dot notation. | homepage
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 54 | jonschlinkert |
| 2 | wtgtybhertgeghgtwtg |
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
npm install -g verbose/verb#dev verb-generate-readme && verb
Running tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
npm install && npm test
Author
Jon Schlinkert
github/jonschlinkert
twitter/jonschlinkert
This file was generated by verb-generate-readme, v0.6.0, on July 22, 2017.
gcs-resumable-upload
Upload a file to Google Cloud Storage with built-in resumable behavior
npm install gcs-resumable-upload
const {upload} = require('gcs-resumable-upload');
const fs = require('fs');
Or from the command line:
If somewhere during the operation, you lose your connection to the internet or your tough-guy brother slammed your laptop shut when he saw what you were uploading, the next time you try to upload to that file, it will resume automatically from where you left off.
How it works
This module stores a file using ConfigStore that is written to when you first start an upload. It is aliased by the file name you are uploading to and holds the first 16kb chunk of data* as well as the unique resumable upload URI. (Resumable uploads are complicated)
If your upload was interrupted, next time you run the code, we ask the API how much data it has already, then simply dump all of the data coming through the pipe that it already has.
After the upload completes, the entry in the config file is removed. Done!
* The first 16kb chunk is stored to validate if you are sending the same data when you resume the upload. If not, a new resumable upload is started with the new data.
Authentication
Oh, right. This module uses google-auth-library and accepts all of the configuration that module does to strike up a connection as config.authConfig. See authConfig.
API
const {gcsResumableUpload} = require('gcs-resumable-upload')
const upload = gcsResumableUpload(config)
upload is an instance of Duplexify.
Error: Invoked if the authorization failed or the request to start a resumable session failed.
String: The resumable upload session URI.
This will remove the config data associated with the provided file.
Error: Invoked if the authorization failed, the request failed, or the file wasn’t successfully uploaded.
Object: The response object from Gaxios.
Object: The file’s new metadata.
Object, Number, Number: Progress event provides upload stats like transferred bytes and content length.
The file was uploaded successfully.
Here we cover the most ‘useful’ methods. If you need advanced details (creating your own tags), see wiki and examples for more info.
const yaml = require('js-yaml');
const fs = require('fs');
// Get document, or throw exception on error
try {
const doc = yaml.safeLoad(fs.readFileSync('/home/ixti/example.yml', 'utf8'));
console.log(doc);
} catch (e) {
console.log(e);
}
Recommended loading way. Parses string as single YAML document. Returns either a plain object, a string or undefined, or throws YAMLException on error. By default, does not support regexps, functions and undefined. This method is safe for untrusted data.
options:
filename (default: null) - string to be used as a file path in error/warning messages.
onWarning (default: null) - function to call on warning messages. Loader will call this function with an instance of YAMLException for each warning.
schema (default: DEFAULT_SAFE_SCHEMA) - specifies a schema to use.
FAILSAFE_SCHEMA - only strings, arrays and plain objects: http://www.yaml.org/spec/1.2/spec.html#id2802346
JSON_SCHEMA - all JSON-supported types: http://www.yaml.org/spec/1.2/spec.html#id2803231
CORE_SCHEMA - same as JSON_SCHEMA: http://www.yaml.org/spec/1.2/spec.html#id2804923
DEFAULT_SAFE_SCHEMA - all supported YAML types, without unsafe ones (!!js/undefined, !!js/regexp and !!js/function): http://yaml.org/type/
DEFAULT_FULL_SCHEMA - all supported YAML types.
json (default: false) - compatibility with JSON.parse behaviour. If true, then duplicate keys in a mapping will override values rather than throwing an error.
NOTE: This function does not understand multi-document sources, it throws exception on those.
NOTE: JS-YAML does not support schema-specific tag resolution restrictions. So, the JSON schema is not as strictly limited as defined in the YAML specification. It allows numbers in any notation, treats Null and NULL as null, etc. The core schema also has no such restrictions. It allows binary notation for integers.
Use with care with untrusted sources. The same as safeLoad() but uses DEFAULT_FULL_SCHEMA by default - adds some JavaScript-specific types: !!js/function, !!js/regexp and !!js/undefined. For untrusted sources, you must additionally validate object structure to avoid injections:
const untrusted_code = '"toString": !<tag:yaml.org,2002:js/function> "function (){very_evil_thing();}"';
// I'm just converting that string, what could possibly go wrong?
require('js-yaml').load(untrusted_code) + ''
Same as safeLoad(), but understands multi-document sources. Applies iterator to each document if specified, or returns array of documents.
Same as safeLoadAll() but uses DEFAULT_FULL_SCHEMA by default.
Serializes object as a YAML document. Uses DEFAULT_SAFE_SCHEMA, so it will throw an exception if you try to dump regexps or functions. However, you can disable exceptions by setting the skipInvalid option to true.
options:
indent (default: 2) - indentation width to use (in spaces).
noArrayIndent (default: false) - when true, will not add an indentation level to array elements.
skipInvalid (default: false) - do not throw on invalid types (like function in the safe schema) and skip pairs and single values with such types.
flowLevel (default: -1) - specifies level of nesting, when to switch from block to flow style for collections. -1 means block style everywhere.
styles - "tag" => "style" map. Each tag may have its own set of styles.
schema (default: DEFAULT_SAFE_SCHEMA) - specifies a schema to use.
sortKeys (default: false) - if true, sort keys when dumping YAML. If a function, use the function to sort the keys.
lineWidth (default: 80) - set max line width.
noRefs (default: false) - if true, don't convert duplicate objects into references.
noCompatMode (default: false) - if true, don't try to be compatible with older YAML versions. Currently: don't quote "yes", "no" and so on, as required for YAML 1.1.
condenseFlow (default: false) - if true, flow sequences will be condensed, omitting the space between a, b (e.g. '[a,b]') and omitting the space between key: value and quoting the key (e.g. '{"a":b}'). Can be useful when using YAML for pretty URL query params, as spaces are %-encoded.
The following table shows the styles (e.g. "canonical", "binary", ...) available for each tag (e.g. !!null, !!int, ...). YAML output is shown on the right side after => (default setting) or ->:
!!null
"canonical" -> "~"
"lowercase" => "null"
"uppercase" -> "NULL"
"camelcase" -> "Null"
!!int
"binary" -> "0b1", "0b101010", "0b1110001111010"
"octal" -> "01", "052", "016172"
"decimal" => "1", "42", "7290"
"hexadecimal" -> "0x1", "0x2A", "0x1C7A"
!!bool
"lowercase" => "true", "false"
"uppercase" -> "TRUE", "FALSE"
"camelcase" -> "True", "False"
!!float
"lowercase" => ".nan", '.inf'
"uppercase" -> ".NAN", '.INF'
"camelcase" -> ".NaN", '.Inf'
Example:
safeDump (object, {
'styles': {
'!!null': 'canonical' // dump null as ~
},
'sortKeys': true // sort object keys
});

Same as safeDump() but without limits (uses DEFAULT_FULL_SCHEMA by default).
The list of standard YAML tags and corresponding JavaScript types. See also YAML tag discussion and YAML types repository.
!!null '' # null
!!bool 'yes' # bool
!!int '3...' # number
!!float '3.14...' # number
!!binary '...base64...' # buffer
!!timestamp 'YYYY-...' # date
!!omap [ ... ] # array of key-value pairs
!!pairs [ ... ]          # array of array pairs
!!set { ... } # array of objects with given keys and null values
!!str '...' # string
!!seq [ ... ] # array
!!map { ... } # object
JavaScript-specific tags
!!js/regexp /pattern/gim # RegExp
!!js/undefined '' # Undefined
!!js/function 'function () {...}' # Function
Note that if you use arrays or objects as keys in JS-YAML: JS does not allow objects or arrays as keys, and stringifies them (by calling their toString() method) at the moment of adding them.
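The stringification behaviour noted above is plain JavaScript semantics and can be reproduced without js-yaml at all:

```javascript
// Plain-JS illustration of the key-stringification behaviour described above
// (this is standard JavaScript semantics, not js-yaml-specific code).
const key = { a: 1 };
const obj = {};
obj[key] = 'value'; // the object key is converted via its toString() method
console.log(Object.keys(obj)); // [ '[object Object]' ]
```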
Also, reading of properties on implicit block mapping keys is not supported yet, so the following YAML document cannot be loaded.
Available as part of the Tidelift Subscription
The maintainers of js-yaml and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.
Simple key-value storage with support for multiple backends
Keyv provides a consistent interface for key-value storage across multiple backends via storage adapters. It supports TTL based expiry, making it suitable as a cache or a persistent key-value store.
There are a few existing modules similar to Keyv; however, Keyv is different because it works with any storage that implements the Map API and handles values such as Buffer.
Install Keyv:
npm install --save keyv
By default everything is stored in memory, you can optionally also install a storage adapter.
npm install --save @keyv/redis
npm install --save @keyv/mongo
npm install --save @keyv/sqlite
npm install --save @keyv/postgres
npm install --save @keyv/mysql
Create a new Keyv instance, passing your connection string if applicable. Keyv will automatically load the correct storage adapter.
const Keyv = require('keyv');
// One of the following
const keyv = new Keyv();
const keyv = new Keyv('redis://user:pass@localhost:6379');
const keyv = new Keyv('mongodb://user:pass@localhost:27017/dbname');
const keyv = new Keyv('sqlite://path/to/database.sqlite');
const keyv = new Keyv('postgresql://user:pass@localhost:5432/dbname');
const keyv = new Keyv('mysql://user:pass@localhost:3306/dbname');
// Handle DB connection errors
keyv.on('error', err => console.log('Connection Error', err));
await keyv.set('foo', 'expires in 1 second', 1000); // true
await keyv.set('foo', 'never expires'); // true
await keyv.get('foo'); // 'never expires'
await keyv.delete('foo'); // true
await keyv.clear(); // undefined

You can namespace your Keyv instance to avoid key collisions and allow you to clear only a certain namespace while using the same database.
const users = new Keyv('redis://user:pass@localhost:6379', { namespace: 'users' });
const cache = new Keyv('redis://user:pass@localhost:6379', { namespace: 'cache' });
await users.set('foo', 'users'); // true
await cache.set('foo', 'cache'); // true
await users.get('foo'); // 'users'
await cache.get('foo'); // 'cache'
await users.clear(); // undefined
await users.get('foo'); // undefined
await cache.get('foo'); // 'cache'

Keyv uses json-buffer for data serialization to ensure consistency across different backends.
You can optionally provide your own serialization functions to support extra data types or to serialize to something other than JSON.
Warning: Using custom serializers means you lose any guarantee of data consistency. You should do extensive testing with your serialization functions and chosen storage engine.
The official storage adapters are covered by over 150 integration tests to guarantee consistent behaviour. They are lightweight, efficient wrappers over the DB clients making use of indexes and native TTLs where available.
You can also use third-party storage adapters or build your own. Keyv will wrap these storage adapters in TTL functionality and handle complex types internally.
const Keyv = require('keyv');
const myAdapter = require('./my-storage-adapter');
const keyv = new Keyv({ store: myAdapter });

Any store that follows the Map API will work.
For example, quick-lru is a completely unrelated module that implements the Map API.
const Keyv = require('keyv');
const QuickLRU = require('quick-lru');
const lru = new QuickLRU({ maxSize: 1000 });
const keyv = new Keyv({ store: lru });

The following are third-party storage adapters compatible with Keyv:
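As a sketch of what such a store looks like, here is a minimal hand-rolled class exposing the Map-style methods (get/set/delete/clear); MemoryStore is a hypothetical name for illustration, not an official adapter:

```javascript
// Minimal custom store sketch implementing the Map-style contract
// (get/set/delete/clear) that Keyv accepts via the `store` option.
class MemoryStore {
  constructor() { this.map = new Map(); }
  get(key) { return this.map.get(key); }
  set(key, value) { this.map.set(key, value); return this; }
  delete(key) { return this.map.delete(key); }
  clear() { this.map.clear(); }
}

const store = new MemoryStore();
store.set('foo', 'bar');
console.log(store.get('foo')); // 'bar'
// With keyv installed, this could be passed as: new Keyv({ store })
```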
Keyv is designed to be easily embedded into other modules to add cache support. The recommended pattern is to expose a cache option in your modules options which is passed through to Keyv. Caching will work in memory by default and users have the option to also install a Keyv storage adapter and pass in a connection string, or any other storage that implements the Map API.
You should also set a namespace for your module so you can safely call .clear() without clearing unrelated app data.
Inside your module:
class AwesomeModule {
constructor(opts) {
this.cache = new Keyv({
uri: typeof opts.cache === 'string' && opts.cache,
store: typeof opts.cache !== 'string' && opts.cache,
namespace: 'awesome-module'
});
}
}

Now it can be consumed like this:
const AwesomeModule = require('awesome-module');
// Caches stuff in memory by default
const awesomeModule = new AwesomeModule();
// After npm install --save keyv-redis
const awesomeModule = new AwesomeModule({ cache: 'redis://localhost' });
// Some third-party module that implements the Map API
const awesomeModule = new AwesomeModule({ cache: some3rdPartyStore });

Returns a new Keyv instance.
The Keyv instance is also an EventEmitter that will emit an 'error' event if the storage adapter connection fails.
Type: String
Default: undefined
The connection string URI.
Merged into the options object as options.uri.
Type: Object
The options object is also passed through to the storage adapter. Check your storage adapter docs for any extra options.
Type: String
Default: 'keyv'
Namespace for the current instance.
Type: Number
Default: undefined
Default TTL. Can be overridden by specifying a TTL on .set().
Type: Function
Default: JSONB.stringify
A custom serialization function.
Type: Function
Default: JSONB.parse
A custom deserialization function.
Type: Storage adapter instance
Default: new Map()
The storage adapter instance to be used by Keyv.
Type: String
Default: undefined
Specify an adapter to use, e.g. 'redis' or 'mongodb'.
Keys must always be strings. Values can be of any type.
Set a value.
By default keys are persistent. You can set an expiry TTL in milliseconds.
Returns true.
Returns the value.
Deletes an entry.
Returns true if the key existed, false if not.
Delete all entries in the current namespace.
Returns undefined.
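The TTL behaviour described for .set() can be sketched in plain JavaScript; this mirrors the idea (store an absolute expiry time and treat expired entries as missing), not Keyv's internals:

```javascript
// TTL-expiry sketch: keep an absolute expiry timestamp with each value
// and lazily evict expired entries on read. Not Keyv's actual code.
const data = new Map();

function set(key, value, ttlMs) {
  const expires = typeof ttlMs === 'number' ? Date.now() + ttlMs : Infinity;
  data.set(key, { value, expires });
  return true;
}

function get(key) {
  const entry = data.get(key);
  if (!entry) return undefined;
  if (Date.now() > entry.expires) {
    data.delete(key); // expired: remove and report as missing
    return undefined;
  }
  return entry.value;
}

set('foo', 'never expires');
console.log(get('foo')); // 'never expires'
set('bar', 'already expired', -1);
console.log(get('bar')); // undefined
```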
Get the native type of a value.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Install with bower
es5, browser and es6 ready
var kindOf = require('kind-of');
kindOf(undefined);
//=> 'undefined'
kindOf(null);
//=> 'null'
kindOf(true);
//=> 'boolean'
kindOf(false);
//=> 'boolean'
kindOf(new Boolean(true));
//=> 'boolean'
kindOf(new Buffer(''));
//=> 'buffer'
kindOf(42);
//=> 'number'
kindOf(new Number(42));
//=> 'number'
kindOf('str');
//=> 'string'
kindOf(new String('str'));
//=> 'string'
kindOf(arguments);
//=> 'arguments'
kindOf({});
//=> 'object'
kindOf(Object.create(null));
//=> 'object'
kindOf(new Test());
//=> 'object'
kindOf(new Date());
//=> 'date'
kindOf([]);
//=> 'array'
kindOf([1, 2, 3]);
//=> 'array'
kindOf(new Array());
//=> 'array'
kindOf(/foo/);
//=> 'regexp'
kindOf(new RegExp('foo'));
//=> 'regexp'
kindOf(function () {});
//=> 'function'
kindOf(function * () {});
//=> 'function'
kindOf(new Function());
//=> 'function'
kindOf(new Map());
//=> 'map'
kindOf(new WeakMap());
//=> 'weakmap'
kindOf(new Set());
//=> 'set'
kindOf(new WeakSet());
//=> 'weakset'
kindOf(Symbol('str'));
//=> 'symbol'
kindOf(new Int8Array());
//=> 'int8array'
kindOf(new Uint8Array());
//=> 'uint8array'
kindOf(new Uint8ClampedArray());
//=> 'uint8clampedarray'
kindOf(new Int16Array());
//=> 'int16array'
kindOf(new Uint16Array());
//=> 'uint16array'
kindOf(new Int32Array());
//=> 'int32array'
kindOf(new Uint32Array());
//=> 'uint32array'
kindOf(new Float32Array());
//=> 'float32array'
kindOf(new Float64Array());
//=> 'float64array'

Added: promise support
Added: Set Iterator and Map Iterator support
Fixed: generatorfunction for generator functions

Benchmarked against typeof and type-of. Note that performance is slower for es6 features Map, WeakMap, Set and WeakSet.
#1: array
current x 23,329,397 ops/sec ±0.82% (94 runs sampled)
lib-type-of x 4,170,273 ops/sec ±0.55% (94 runs sampled)
lib-typeof x 9,686,935 ops/sec ±0.59% (98 runs sampled)
#2: boolean
current x 27,197,115 ops/sec ±0.85% (94 runs sampled)
lib-type-of x 3,145,791 ops/sec ±0.73% (97 runs sampled)
lib-typeof x 9,199,562 ops/sec ±0.44% (99 runs sampled)
#3: date
current x 20,190,117 ops/sec ±0.86% (92 runs sampled)
lib-type-of x 5,166,970 ops/sec ±0.74% (94 runs sampled)
lib-typeof x 9,610,821 ops/sec ±0.50% (96 runs sampled)
#4: function
current x 23,855,460 ops/sec ±0.60% (97 runs sampled)
lib-type-of x 5,667,740 ops/sec ±0.54% (100 runs sampled)
lib-typeof x 10,010,644 ops/sec ±0.44% (100 runs sampled)
#5: null
current x 27,061,047 ops/sec ±0.97% (96 runs sampled)
lib-type-of x 13,965,573 ops/sec ±0.62% (97 runs sampled)
lib-typeof x 8,460,194 ops/sec ±0.61% (97 runs sampled)
#6: number
current x 25,075,682 ops/sec ±0.53% (99 runs sampled)
lib-type-of x 2,266,405 ops/sec ±0.41% (98 runs sampled)
lib-typeof x 9,821,481 ops/sec ±0.45% (99 runs sampled)
#7: object
current x 3,348,980 ops/sec ±0.49% (99 runs sampled)
lib-type-of x 3,245,138 ops/sec ±0.60% (94 runs sampled)
lib-typeof x 9,262,952 ops/sec ±0.59% (99 runs sampled)
#8: regex
current x 21,284,827 ops/sec ±0.72% (96 runs sampled)
lib-type-of x 4,689,241 ops/sec ±0.43% (100 runs sampled)
lib-typeof x 8,957,593 ops/sec ±0.62% (98 runs sampled)
#9: string
current x 25,379,234 ops/sec ±0.58% (96 runs sampled)
lib-type-of x 3,635,148 ops/sec ±0.76% (93 runs sampled)
lib-typeof x 9,494,134 ops/sec ±0.49% (98 runs sampled)
#10: undef
current x 27,459,221 ops/sec ±1.01% (93 runs sampled)
lib-type-of x 14,360,433 ops/sec ±0.52% (99 runs sampled)
lib-typeof x 23,202,868 ops/sec ±0.59% (94 runs sampled)

In 7 out of 8 cases, this library is 2x-10x faster than other top libraries included in the benchmarks. There are a few things that lead to this performance advantage, none of them hard and fast rules, but all of them simple and repeatable in almost any code library:
typeof checks were being used in my own libraries and other libraries I use a lot.
The check against the Object constructor happens by process of elimination rather than brute force up front (e.g. by using something like val.constructor.name), so that every other type check is not penalized by it.
Why do .slice(8, -1).toLowerCase() just to get the word regex? It's much faster to do if (type === '[object RegExp]') return 'regex'.
There is a require() statement to use the library anyway, regardless of how the code is written.

kind-of is more correct than other type checking libs I've looked at. For example, here are some differing results from other popular libs:
Incorrectly tests instances of custom constructors (pretty common):
Returns object instead of arguments:
Incorrectly returns object for generator functions, buffers, Map, Set, WeakMap and WeakSet:
function * foo() {}
console.log(typeOf(foo));
//=> 'object'
console.log(typeOf(new Buffer('')));
//=> 'object'
console.log(typeOf(new Map()));
//=> 'object'
console.log(typeOf(new Set()));
//=> 'object'
console.log(typeOf(new WeakMap()));
//=> 'object'
console.log(typeOf(new WeakSet()));
//=> 'object'

Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
true if the given string looks like a glob pattern or an extglob pattern… more | homepage
true if the value is a primitive. | homepage

| Commits | Contributor |
|---|---|
| 82 | jonschlinkert |
| 3 | aretecode |
| 2 | miguelmota |
| 1 | dtothefp |
| 1 | ksheedlo |
| 1 | pdehaan |
| 1 | laggingreflex |
| 1 | charlike |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on October 13, 2017.

# type-check
For updates on type-check, follow me on twitter.
npm install type-check
// Basic types:
var typeCheck = require('type-check').typeCheck;
typeCheck('Number', 1); // true
typeCheck('Number', 'str'); // false
typeCheck('Error', new Error); // true
typeCheck('Undefined', undefined); // true
// Comment
typeCheck('count::Number', 1); // true
// One type OR another type:
typeCheck('Number | String', 2); // true
typeCheck('Number | String', 'str'); // true
// Wildcard, matches all types:
typeCheck('*', 2) // true
// Array, all elements of a single type:
typeCheck('[Number]', [1, 2, 3]); // true
typeCheck('[Number]', [1, 'str', 3]); // false
// Tuples, or fixed length arrays with elements of different types:
typeCheck('(String, Number)', ['str', 2]); // true
typeCheck('(String, Number)', ['str']); // false
typeCheck('(String, Number)', ['str', 2, 5]); // false
// Object properties:
typeCheck('{x: Number, y: Boolean}', {x: 2, y: false}); // true
typeCheck('{x: Number, y: Boolean}', {x: 2}); // false
typeCheck('{x: Number, y: Maybe Boolean}', {x: 2}); // true
typeCheck('{x: Number, y: Boolean}', {x: 2, y: false, z: 3}); // false
typeCheck('{x: Number, y: Boolean, ...}', {x: 2, y: false, z: 3}); // true
// A particular type AND object properties:
typeCheck('RegExp{source: String, ...}', /re/i); // true
typeCheck('RegExp{source: String, ...}', {source: 're'}); // false
// Custom types:
var opt = {customTypes:
{Even: { typeOf: 'Number', validate: function(x) { return x % 2 === 0; }}}};
typeCheck('Even', 2, opt); // true
// Nested:
var type = '{a: (String, [Number], {y: Array, ...}), b: Error{message: String, ...}}'
typeCheck(type, {a: ['hi', [1, 2, 3], {y: [1, 'ms']}], b: new Error('oh no')}); // true

Check out the type syntax format and guide.
require('type-check'); returns an object that exposes four properties. VERSION is the current version of the library as a string. typeCheck, parseType, and parsedTypeCheck are functions.
// typeCheck(type, input, options);
typeCheck('Number', 2); // true
// parseType(type);
var parsedType = parseType('Number'); // object
// parsedTypeCheck(parsedType, input, options);
parsedTypeCheck(parsedType, 2); // true

typeCheck checks a JavaScript value input against type written in the type format (taking into account the optional options) and returns whether the input matches the type.
String - the type written in the type format which to check against
* - any JavaScript value, which is to be checked against the type
Maybe Object - an optional parameter specifying additional options; currently the only available option is specifying custom types
Boolean - whether the input matches the type
parseType parses a string type written in the type format into an object representing the parsed type.
String - the type written in the type format which to parse
Object - an object in the parsed type format representing the parsed type
parsedTypeCheck checks a JavaScript value input against a parsed type in the parsed type format (taking into account the optional options) and returns whether the input matches the type. Use this in conjunction with parseType if you are going to use a type more than once.
Object - the type in the parsed type format which to check against
* - any JavaScript value, which is to be checked against the type
Maybe Object - an optional parameter specifying additional options; currently the only available option is specifying custom types
Boolean - whether the input matches the type
parsedTypeCheck([{type: 'Number'}], 2); // true
var parsedType = parseType('String');
parsedTypeCheck(parsedType, 'str'); // true

White space is ignored. The root node is a Types.
Identifier - [\$\w]+ - a group of any lower or upper case letters, numbers, underscores, or dollar signs - eg. String
Type - an Identifier, an Identifier followed by a Structure, just a Structure, or a wildcard * - eg. String, Object{x: Number}, {x: Number}, Array{0: String, 1: Boolean, length: Number}, *
Types - optionally a comment (an Identifier followed by a ::), optionally the identifier Maybe, one or more Type separated by | - eg. Number, String | Date, Maybe Number, Maybe Boolean | String
Structure - Fields, or a Tuple, or an Array - eg. {x: Number}, (String, Number), [Date]
Fields - a {, followed by one or more Field separated by a comma , (trailing comma , is permitted), optionally an ... (always preceded by a comma ,), followed by a } - eg. {x: Number, y: String}, {k: Function, ...}
Field - an Identifier, followed by a colon :, followed by Types - eg. x: Date | String, y: Boolean
Tuple - a (, followed by one or more Types separated by a comma , (trailing comma , is permitted), followed by a ) - eg. (Date), (Number, Date)
Array - a [ followed by exactly one Types followed by a ] - eg. [Boolean], [Boolean | Null]

type-check uses Object.prototype.toString to find out the basic type of a value. Specifically,
A basic type, eg. Number, uses this check. This is much more versatile than using typeof - for example, with document, typeof produces 'object' which isn’t that useful, and our technique produces 'HTMLDocument'.
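The Object.prototype.toString check described here can be sketched directly; this is an illustration of the technique, not type-check's source:

```javascript
// Basic-type detection via Object.prototype.toString, as described above.
// This yields richer names than typeof (e.g. 'Array', 'RegExp', 'Null').
function basicType(value) {
  // '[object Array]' -> 'Array'
  return Object.prototype.toString.call(value).slice(8, -1);
}

console.log(basicType([]));         // 'Array'
console.log(basicType(null));       // 'Null'
console.log(basicType(new Date())); // 'Date'
console.log(basicType(2));          // 'Number'
```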
You may check for multiple types by separating types with a |. The checker proceeds from left to right, and passes if the value is any of the types - eg. String | Boolean first checks if the value is a string, and then if it is a boolean. If it is none of those, then it returns false.
Adding a Maybe in front of a list of multiple types is the same as also checking for Null and Undefined - eg. Maybe String is equivalent to Undefined | Null | String.
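As a hand-rolled illustration of that equivalence (not type-check's code), Maybe String behaves like:

```javascript
// Maybe String is equivalent to Undefined | Null | String,
// checked left to right as described above.
function checkMaybeString(value) {
  return value === undefined || value === null || typeof value === 'string';
}

console.log(checkMaybeString(undefined)); // true
console.log(checkMaybeString(null));      // true
console.log(checkMaybeString('hi'));      // true
console.log(checkMaybeString(42));        // false
```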
You may add a comment to remind you of what the type is for by following an identifier with a :: before a type (or multiple types). The comment is simply thrown out.
The wildcard * matches all types.
There are three types of structures for checking the contents of a value: ‘fields’, ‘tuple’, and ‘array’.
If used by itself, a ‘fields’ structure will pass with any type of object as long as it is an instance of Object and the properties pass - this allows for duck typing - eg. {x: Boolean}.
To check if the properties pass, and the value is of a certain type, you can specify the type - eg. Error{message: String}.
If you want to make a field optional, you can simply use Maybe - eg. {x: Boolean, y: Maybe String} will still pass if y is undefined (or null).
If you don’t care if the value has properties beyond what you have specified, you can use the ‘etc’ operator ... - eg. {x: Boolean, ...} will match an object with an x property that is a boolean, and with zero or more other properties.
For an array, you must specify one or more types (separated by |) - it will pass for something of any length as long as each element passes the types provided - eg. [Number], [Number | String].
A tuple checks for a fixed number of elements, each of a potentially different type. Each element is separated by a comma - eg. (String, Number).
An array and tuple structure check that the value is of type Array by default, but if another type is specified, they will check for that instead - eg. Int32Array[Number]. You can use the wildcard * to search for any type at all.
Check out the type precedence library for type-check.
Options is an object. It is an optional parameter to the typeCheck and parsedTypeCheck functions. The only current option is customTypes.
Example:
var options = {
customTypes: {
Even: {
typeOf: 'Number',
validate: function(x) {
return x % 2 === 0;
}
}
}
};
typeCheck('Even', 2, options); // true
typeCheck('Even', 3, options); // false

customTypes allows you to set up custom types for validation. The value of this is an object. The keys of the object are the types you will be matching. Each value of the object will be an object having a typeOf property - a string, and validate property - a function.
The typeOf property is the type the value should be (optional - if not set only validate will be used), and validate is a function which should return true if the value is of that type. validate receives one parameter, which is the value that we are checking.
type-check is written in LiveScript - a language that compiles to JavaScript. It also uses the prelude.ls library.
CachePolicy tells when responses can be reused from a cache, taking into account HTTP RFC 7234 rules for user agents and shared caches. It also implements RFC 5861, covering stale-if-error and stale-while-revalidate. It's aware of many tricky details such as the Vary header, proxy revalidation, and authenticated responses.
Cacheability of an HTTP response depends on how it was requested, so both request and response are required to create the policy.
const policy = new CachePolicy(request, response, options);
if (!policy.storable()) {
// throw the response away, it's not usable at all
return;
}
// Cache the data AND the policy object in your cache
// (this is pseudocode, roll your own cache (lru-cache package works))
letsPretendThisIsSomeCache.set(
request.url,
{ policy, response },
policy.timeToLive()
);

// And later, when you receive a new request:
const { policy, response } = letsPretendThisIsSomeCache.get(newRequest.url);
// It's not enough that it exists in the cache, it has to match the new request, too:
if (policy && policy.satisfiesWithoutRevalidation(newRequest)) {
// OK, the previous response can be used to respond to the `newRequest`.
// Response headers have to be updated, e.g. to add Age and remove uncacheable headers.
response.headers = policy.responseHeaders();
return response;
}

It may be surprising, but it's not enough for an HTTP response to be fresh to satisfy a request. It may need to match request headers specified in Vary. Even a matching fresh response may still not be usable if the new request restricted cacheability, etc.
The key method is satisfiesWithoutRevalidation(newRequest), which checks whether the newRequest is compatible with the original request and whether all caching conditions are met.
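The Vary requirement can be illustrated with a simplified header-matching sketch; this shows the rule itself, not the library's implementation:

```javascript
// Simplified Vary matching: a cached response is reusable only if every
// request header named in its Vary header is unchanged in the new request.
function varyMatches(varyHeader, oldHeaders, newHeaders) {
  if (!varyHeader) return true;               // no Vary: any request matches
  if (varyHeader.trim() === '*') return false; // Vary: * never matches
  return varyHeader.split(',').every(name => {
    const h = name.trim().toLowerCase();
    return (oldHeaders[h] || '') === (newHeaders[h] || '');
  });
}

console.log(varyMatches('accept-encoding',
  { 'accept-encoding': 'gzip' },
  { 'accept-encoding': 'gzip' })); // true
console.log(varyMatches('accept-encoding',
  { 'accept-encoding': 'gzip' },
  { 'accept-encoding': 'br' }));   // false
```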
Request and response must have a headers property with all header names in lower case. url, status and method are optional (defaults are any URL, status 200, and GET method).
const request = {
url: '/',
method: 'GET',
headers: {
accept: '*/*',
},
};
const response = {
status: 200,
headers: {
'cache-control': 'public, max-age=7234',
},
};
const options = {
shared: true,
cacheHeuristic: 0.1,
immutableMinTimeToLive: 24 * 3600 * 1000, // 24h
ignoreCargoCult: false,
};

If options.shared is true (default), then the response is evaluated from the perspective of a shared cache (i.e. private is not cacheable and s-maxage is respected). If options.shared is false, then the response is evaluated from the perspective of a single-user cache (i.e. private is cacheable and s-maxage is ignored). shared: true is recommended for HTTP clients.
options.cacheHeuristic is a fraction of the response's age that is used as a fallback cache duration when the response has no explicit freshness lifetime. The default is 0.1 (10%), e.g. if a file hasn't been modified for 100 days, it'll be cached for 100*0.1 = 10 days.
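The arithmetic behind that example can be sketched directly (an illustration of the heuristic, not the library's internal code):

```javascript
// Heuristic fallback freshness: a fraction (default 10%) of the time
// elapsed since the response's Last-Modified date.
function heuristicFreshnessMs(lastModifiedMs, nowMs, cacheHeuristic = 0.1) {
  return (nowMs - lastModifiedMs) * cacheHeuristic;
}

const DAY = 24 * 3600 * 1000;
// not modified for 100 days -> fresh for about 10 days
heuristicFreshnessMs(Date.now() - 100 * DAY, Date.now());
```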
options.immutableMinTimeToLive is the number of milliseconds to use as the default time to cache responses with Cache-Control: immutable. Note that, per the RFC, even immutable responses can become stale, so max-age still overrides the default.
If options.ignoreCargoCult is true, common anti-cache directives will be completely ignored if the non-standard pre-check and post-check directives are present. These two useless directives are most commonly found in bad StackOverflow answers and PHP’s “session limiter” defaults.
storable(): Returns true if the response can be stored in a cache. If it's false then you MUST NOT store either the request or the response.
satisfiesWithoutRevalidation(newRequest): This is the most important method. Use this method to check whether the cached response is still fresh in the context of the new request.
If it returns true, then the given request matches the original response this cache policy has been created with, and the response can be reused without contacting the server. Note that the old response can’t be returned without being updated, see responseHeaders().
If it returns false, then the response may not match at all (e.g. it's for a different URL or method), or it may need to be refreshed first (see revalidationHeaders()).
responseHeaders(): Returns an updated, filtered set of response headers to return to clients receiving the cached response. This function is necessary because proxies MUST always remove hop-by-hop headers (such as TE and Connection) and update the response's Age to avoid doubling cache time.
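The idea can be sketched as follows. This is a simplified illustration of the kind of filtering responseHeaders() performs, not the library's actual implementation (the real method handles more cases):

```javascript
// Drop hop-by-hop headers and refresh Age before serving from cache.
const HOP_BY_HOP = ['connection', 'te', 'transfer-encoding', 'upgrade',
  'keep-alive', 'proxy-authenticate', 'proxy-authorization', 'trailer'];

function filterCachedHeaders(headers, ageSeconds) {
  const out = {};
  for (const [name, value] of Object.entries(headers)) {
    if (!HOP_BY_HOP.includes(name)) out[name] = value;
  }
  // updating Age prevents downstream caches from double-counting freshness
  out['age'] = String(Math.round(ageSeconds));
  return out;
}
```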
timeToLive(): Returns the approximate time in milliseconds until the response becomes stale (i.e. not fresh).
After that time (when timeToLive() <= 0) the response might not be usable without revalidation. However, there are exceptions, e.g. a client can explicitly allow stale responses, so always check with satisfiesWithoutRevalidation(). stale-if-error and stale-while-revalidate extend the time to live, allowing the cached response to be used even after it has become stale.
toObject()/fromObject(json): Chances are you'll want to store the CachePolicy object along with the cached response. obj = policy.toObject() gives a plain JSON-serializable object, and policy = CachePolicy.fromObject(obj) creates an instance from it.
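For example, an entry can be round-tripped through any string-based store. toObject() and fromObject() are the library's API; the storage glue below is a hypothetical sketch:

```javascript
// Sketch: persisting a cache entry as JSON. `policy.toObject()` and
// CachePolicy.fromObject (passed in as `fromObject`) are the library's
// serialization API; the surrounding store is up to you.
function serializeEntry(policy, response) {
  return JSON.stringify({ policy: policy.toObject(), response });
}

function deserializeEntry(raw, fromObject) {
  const { policy, response } = JSON.parse(raw);
  return { policy: fromObject(policy), response };
}
```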
When a cached response has expired, it can be made fresh again by making a request to the origin server. The server may respond with status 304 (Not Modified) without sending the response body again, saving bandwidth.
The following methods help perform the update efficiently and correctly.
revalidationHeaders(newRequest): Returns an updated, filtered set of request headers to send to the origin server to check whether the cached response can be reused. These headers allow the origin server to return status 304 indicating the response is still fresh. All headers unrelated to caching are passed through as-is.
Use this method when updating cache from the origin server.
revalidatedPolicy(revalidationRequest, revalidationResponse): Use this method to update the cache after receiving a new response from the origin server. It returns an object with two keys:
policy: A new CachePolicy with HTTP headers updated from revalidationResponse. You can always replace the old cached CachePolicy with the new one.
modified: Boolean indicating whether the response body has changed.
If modified is false, then a valid 304 Not Modified response has been received, and you can reuse the old cached response body. This is also affected by stale-if-error.
If modified is true, you should use the new response's body (if present), or make another request to the origin server without any conditional headers (i.e. don't use revalidationHeaders() this time) to get the new resource.

// When serving requests from cache:
const { oldPolicy, oldResponse } = letsPretendThisIsSomeCache.get(
newRequest.url
);
if (!oldPolicy.satisfiesWithoutRevalidation(newRequest)) {
// Change the request to ask the origin server if the cached response can be used
newRequest.headers = oldPolicy.revalidationHeaders(newRequest);
// Send request to the origin server. The server may respond with status 304
const newResponse = await makeRequest(newRequest);
// Create updated policy and combined response from the old and new data
const { policy, modified } = oldPolicy.revalidatedPolicy(
newRequest,
newResponse
);
const response = modified ? newResponse : oldResponse;
// Update the cache with the newer/fresher response
letsPretendThisIsSomeCache.set(
newRequest.url,
{ policy, response },
policy.timeToLive()
);
// And proceed returning cached response as usual
response.headers = policy.responseHeaders();
return response;
}
Cache-Control response header with all the quirks.
Expires with check for bad clocks.
Pragma response header.
Age response header.
Vary response header.
stale-if-error.
If-Range (but correctly supports them as non-cacheable).

Date: Per the RFC, the cache should take into account the time between the server-supplied Date and the time it received the response. The RFC-mandated behavior creates two problems:
The max-age=1 trick (which is useful for reverse proxies on high-traffic servers) stops working.
Previous versions of this library had an option to ignore the server date if it was "too inaccurate". To support the max-age=1 trick the library also has to ignore dates that are pretty accurate. There's no point in having an option to trust dates that are only a bit inaccurate, so this library won't trust any server dates. max-age will be interpreted from the time the response has been received, not from when it has been sent. This will affect only RFC 1149 networks.
A tiny, fast JavaScript parser written in JavaScript.
You are welcome to report bugs or create pull requests on github. For questions and discussion, please use the Tern discussion forum.
The easiest way to install acorn is from npm:
Alternately, you can download the source and build acorn yourself:
parse(input, options) is the main interface to the library. The input parameter is a string, options can be undefined or an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the ESTree spec.
When encountering a syntax error, the parser will raise a SyntaxError object with a meaningful message. The error object will have a pos property that indicates the string offset at which the error occurred, and a loc object that contains a {line, column} object referring to that same position.
Options can be provided by passing a second argument, which should be an object containing any of these fields:
ecmaVersion: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019) or 11 (2020, partial support). This influences support for strict mode, the set of reserved words, and support for new syntax features. Default is 10.
NOTE: Only ‘stage 4’ (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features can be implemented through plugins.
sourceType: Indicate the mode the code should be parsed in. Can be either "script" or "module". This influences global strict mode and parsing of import and export declarations.
NOTE: If set to "module", then static import / export syntax will be valid, even if ecmaVersion is less than 6.
onInsertedSemicolon: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if locations is on, also a {line, column} object representing this position.
onTrailingComma: Like onInsertedSemicolon, but for trailing commas.
allowReserved: If false, using a reserved word will generate an error. Defaults to true for ecmaVersion 3, false for higher versions. When given the value "never", reserved words and keywords can also not be used as property names (as in Internet Explorer’s old parser).
allowReturnOutsideFunction: By default, a return statement at the top level raises an error. Set this to true to accept such code.
allowImportExportEverywhere: By default, import and export declarations can only appear at a program’s top level. Setting this option to true allows them anywhere where a statement is allowed.
allowAwaitOutsideFunction: By default, await expressions can only appear inside async functions. Setting this option to true allows top-level await expressions. They are still not allowed in non-async functions, though.
allowHashBang: When this is enabled (off by default), if the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.
locations: When true, each node has a loc object attached with start and end subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form. Default is false.
onToken: If a function is passed for this option, each found token will be passed to it in the same format as tokens returned from tokenizer().getToken().
If an array is passed, each found token is pushed to it.
Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
onComment: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:
block: true if the comment is a block comment, false if it is a line comment.
text: The content of the comment.
start: Character offset of the start of the comment.
end: Character offset of the end of the comment.
When the locations option is on, the {line, column} locations of the comment's start and end are passed as two additional parameters.
If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:
{
"type": "Line" | "Block",
"value": "comment text",
"start": Number,
"end": Number,
// If `locations` option is on:
"loc": {
"start": {line: Number, column: Number}
"end": {line: Number, column: Number}
},
// If `ranges` option is on:
"range": [Number, Number]
}
Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
ranges: Nodes have their start and end character offsets recorded in start and end properties (directly on the node, rather than the loc object, which holds line/column data). To also add a semi-standardized range property holding a [start, end] array with the same numbers, set the ranges option to true.
program: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the program option in subsequent parses. This will add the toplevel forms of the parsed file to the “Program” (top) node of an existing parse tree.
sourceFile: When the locations option is true, you can pass this option to add a source attribute in every node’s loc object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.
directSourceFile: Like sourceFile, but a sourceFile property will be added (regardless of the location option) directly to the nodes, rather than the loc object.
preserveParens: If this option is true, parenthesized expressions are represented by (non-standard) ParenthesizedExpression nodes that have a single expression property containing the expression inside parentheses.
parseExpressionAt(input, offset, options) will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression.
tokenizer(input, options) returns an object with a getToken method that can be called repeatedly to get the next token, a {start, end, type, value} object (with added loc property when the locations option is enabled and range property when the ranges option is enabled). When the token’s type is tokTypes.eof, you should stop calling the method, since it will keep returning that same token forever.
In an ES6 environment, the returned result can be used like any other protocol-compliant iterable:
for (let token of acorn.tokenizer(str)) {
// iterate over the tokens
}
// transform code to array of tokens:
var tokens = [...acorn.tokenizer(str)];
tokTypes holds an object mapping names to the token type objects that end up in the type properties of tokens.
getLineInfo(input, offset) can be used to get a {line, column} object for a given program string and offset.
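A rough sketch of what getLineInfo computes (acorn's real implementation also recognizes \r and Unicode line separators; this simplified version counts only \n):

```javascript
// One-based line, zero-based column, matching the convention of acorn's
// loc objects. Simplified: only '\n' is treated as a line break.
function lineInfo(input, offset) {
  let line = 1, lineStart = 0;
  for (let i = 0; i < offset; i++) {
    if (input[i] === '\n') { line++; lineStart = i + 1; }
  }
  return { line, column: offset - lineStart };
}

lineInfo('var a;\nvar b;', 8); // { line: 2, column: 1 }
```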
The Parser class: Instances of the Parser class contain all the state and logic that drives a parse. It has static methods parse, parseExpressionAt, and tokenizer that match the top-level functions of the same name.
When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static extend method.
var acorn = require("acorn");
var jsx = require("acorn-jsx");
var JSXParser = acorn.Parser.extend(jsx());
JSXParser.parse("foo(<bar/>)");
The extend method takes any number of plugin values, and returns a new Parser class that includes the extra parser logic provided by the plugins.
The bin/acorn utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:
--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10: Sets the ECMAScript version to parse. Default is version 9.
--module: Sets the parsing mode to "module". Is set to "script" otherwise.
--locations: Attaches a “loc” object to each node with “start” and “end” subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form.
--allow-hash-bang: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.
--compact: No whitespace is used in the AST output.
--silent: Do not output the AST, just return the exit status.
--help: Print the usage information and quit.
The utility spits out the syntax tree as JSON data.
Plugins for ECMAScript proposals:
acorn-stage3: Parse most stage 3 proposals, bundling:
An ESLint parser which leverages TypeScript ESTree to allow for ESLint to lint TypeScript source code.
You can find our Getting Started docs here
These docs walk you through setting up ESLint, this parser, and our plugin. If you know what you’re doing and just want a quick start, read on…
$ yarn add -D typescript @typescript-eslint/parser
$ npm i --save-dev typescript @typescript-eslint/parser
In your ESLint configuration file, set the parser property:
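A minimal configuration might look like the sketch below; only the parser line is essential, while the plugin entry and empty rules block are assumptions you would adapt to your project:

```javascript
// .eslintrc.js - minimal sketch
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: {
    sourceType: 'module',
  },
  plugins: ['@typescript-eslint'],
  rules: {
    // add or extend rules here
  },
};
```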
There is sometimes an incorrect assumption that the parser itself is what does everything necessary to facilitate the use of ESLint with TypeScript. In actuality, it is the combination of the parser and one or more plugins which allow you to maximize your usage of ESLint with TypeScript.
For example, once this parser successfully produces an AST for the TypeScript source code, it might well contain some information which simply does not exist in a standard JavaScript context, such as the data for a TypeScript-specific construct, like an interface.
The core rules built into ESLint, such as indent, have no knowledge of such constructs, so they cannot be expected to work with them out of the box.
Instead, you also need to make use of one or more plugins which will add or extend rules with TypeScript-specific features.
By far the most common case will be installing the @typescript-eslint/eslint-plugin plugin, but there are also other relevant options available such as @typescript-eslint/eslint-plugin-tslint.
The following additional configuration options are available by specifying them in parserOptions in your ESLint configuration file.
interface ParserOptions {
ecmaFeatures?: {
jsx?: boolean;
globalReturn?: boolean;
};
ecmaVersion?: number;
jsxPragma?: string;
jsxFragmentName?: string | null;
lib?: string[];
project?: string | string[];
projectFolderIgnoreList?: string[];
tsconfigRootDir?: string;
extraFileExtensions?: string[];
warnOnUnsupportedTypeScriptVersion?: boolean;
}
parserOptions.ecmaFeatures.jsx: Default false.
Enable parsing JSX when true. More details can be found here.
NOTE: this setting does not affect known file types (.js, .jsx, .ts, .tsx, .json) because the TypeScript compiler has its own internal handling for known file extensions. The exact behavior is as follows:
If parserOptions.project is not provided:
.js, .jsx, .tsx files are parsed as if this is true.
.ts files are parsed as if this is false.
Unknown extensions (.md, .vue) will respect this setting.
If parserOptions.project is provided (i.e. you are using rules with type information):
.js, .jsx, .tsx files are parsed as if this is true.
.ts files are parsed as if this is false.
Unknown extensions (.md, .vue) are parsed as if this is false.
parserOptions.ecmaFeatures.globalReturn: Default false.
This option allows you to tell the parser if you want to allow global return statements in your codebase.
parserOptions.ecmaVersion: Default 2018.
Accepts any valid ECMAScript version number:
Specifies the version of ECMAScript syntax you want to use. This is used by the parser to determine how to perform scope analysis, and it affects the default
parserOptions.jsxPragma: Default 'React'.
The identifier that’s used for JSX Elements creation (after transpilation). If you’re using a library other than React (like preact), then you should change this value.
This should not be a member expression - just the root identifier (i.e. use "React" instead of "React.createElement").
If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.
parserOptions.jsxFragmentName: Default null.
The identifier that’s used for JSX fragment elements (after transpilation). If null, assumes transpilation will always use a member of the configured jsxPragma. This should not be a member expression - just the root identifier (i.e. use "h" instead of "h.Fragment").
If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.
parserOptions.lib: Default ['es2018'].
For valid options, see the TypeScript compiler options.
Specifies the TypeScript libs that are available. This is used by the scope analyser to ensure there are global variables declared for the types exposed by TypeScript.
If you provide parserOptions.project, you do not need to set this, as it will be automatically detected from the compiler.
parserOptions.project: Default undefined.
This option allows you to provide a path to your project’s tsconfig.json. This setting is required if you want to use rules which require type information. Relative paths are interpreted relative to the current working directory if tsconfigRootDir is not set. If you intend on running ESLint from directories other than the project root, you should consider using tsconfigRootDir.
Accepted values:
If you use project references, TypeScript will not automatically use project references to resolve files. This means that you will have to add each referenced tsconfig to the project field either separately, or via a glob.
TypeScript will ignore files with duplicate filenames in the same folder (for example, src/file.ts and src/file.js). TypeScript purposely ignores all but one of the files, keeping only the file with the highest-priority extension (the extension priority order, from highest to lowest, is .ts, .tsx, .d.ts, .js, .jsx). For more info see #955.
Note that if this setting is specified and createDefaultProgram is not, you must only lint files that are included in the projects as defined by the provided tsconfig.json files. If your existing configuration does not include all of the files you would like to lint, you can create a separate tsconfig.eslint.json as follows:
{
// extend your base config so you don't have to redefine your compilerOptions
"extends": "./tsconfig.json",
"include": [
"src/**/*.ts",
"test/**/*.ts",
"typings/**/*.ts",
// etc
// if you have a mixed JS/TS codebase, don't forget to include your JS files
"src/**/*.js"
]
}
parserOptions.tsconfigRootDir: Default undefined.
This option allows you to provide the root directory for relative tsconfig paths specified in the project option above.
parserOptions.projectFolderIgnoreList: Default ["**/node_modules/**"].
This option allows you to ignore folders from being included in your provided list of projects. This is useful if you have configured glob patterns, but want to make sure you ignore certain folders.
It accepts an array of globs to exclude from the project globs.
For example, by default it will ensure that a glob like ./**/tsconfig.json will not match any tsconfigs within your node_modules folder (some npm packages do not exclude their source files from their published packages).
parserOptions.extraFileExtensions: Default undefined.
This option allows you to provide one or more additional file extensions which should be considered in the TypeScript Program compilation. The default extensions are .ts, .tsx, .js, and .jsx. Add extensions starting with ., followed by the file extension. E.g. for a .vue file use extraFileExtensions: [".vue"].
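For instance, a sketch of a configuration that lets the parser consider .vue files (in practice you would pair this with a Vue-aware processor or plugin):

```javascript
// .eslintrc.js fragment (hypothetical example)
module.exports = {
  parser: '@typescript-eslint/parser',
  parserOptions: {
    project: './tsconfig.json',
    extraFileExtensions: ['.vue'],
  },
};
```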
parserOptions.warnOnUnsupportedTypeScriptVersion: Default true.
This option allows you to toggle the warning that the parser will give you if you use a version of TypeScript which is not explicitly supported.
parserOptions.createDefaultProgram: Default false.
This option allows you to request that when the project setting is specified, files will be allowed when not included in the projects defined by the provided tsconfig.json files. Using this option will incur significant performance costs. This option is primarily included for backwards-compatibility. See the project section above for more information.
Please see typescript-eslint for the supported TypeScript version.
Please ensure that you are using a supported version before submitting any issues/bug reports.
Please use the @typescript-eslint/parser issue template when creating your issue and fill out the information requested as best you can. This will really help us when looking into your issue.
See the contributing guide here
Light ECMAScript (JavaScript) Value Notation. Levn is a library which allows you to parse a string into a JavaScript value based on an expected type. It is meant for short amounts of human-entered data (eg. config files, command line arguments).
How is this different from JSON? levn is meant to be written by humans only, is (due to the previous point) much more concise, can be validated against supplied types, has regex and date literals, and can easily be extended with custom types. On the other hand, it is probably slower and thus less efficient at transporting large amounts of data, which is fine since that is not its purpose.
npm install levn
For updates on levn, follow me on twitter.
var parse = require('levn').parse;
parse('Number', '2'); // 2
parse('String', '2'); // '2'
parse('String', 'levn'); // 'levn'
parse('String', 'a b'); // 'a b'
parse('Boolean', 'true'); // true
parse('Date', '#2011-11-11#'); // (Date object)
parse('Date', '2011-11-11'); // (Date object)
parse('RegExp', '/[a-z]/gi'); // /[a-z]/gi
parse('RegExp', 're'); // /re/
parse('Int', '2'); // 2
parse('Number | String', 'str'); // 'str'
parse('Number | String', '2'); // 2
parse('[Number]', '[1,2,3]'); // [1,2,3]
parse('(String, Boolean)', '(hi, false)'); // ['hi', false]
parse('{a: String, b: Number}', '{a: str, b: 2}'); // {a: 'str', b: 2}
// at the top level, you can omit surrounding delimiters
parse('[Number]', '1,2,3'); // [1,2,3]
parse('(String, Boolean)', 'hi, false'); // ['hi', false]
parse('{a: String, b: Number}', 'a: str, b: 2'); // {a: 'str', b: 2}
// wildcard - auto choose type
parse('*', '[hi,(null,[42]),{k: true}]'); // ['hi', [null, [42]], {k: true}]
require('levn') returns an object that exposes three properties. VERSION is the current version of the library as a string. parse and parsedTypeParse are functions.
// parse(type, input, options);
parse('[Number]', '1,2,3'); // [1, 2, 3]
// parsedTypeParse(parsedType, input, options);
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
parse casts the string input into a JavaScript value according to the specified type in the type format (taking into account the optional options) and returns the resulting JavaScript value.
String - the type, written in the type format, to check against
String - the value, written in the levn format
Maybe Object - an optional parameter specifying additional options
* - the resulting JavaScript value
parsedTypeParse casts the string input into a JavaScript value according to the specified type which has already been parsed (taking into account the optional options) and returns the resulting JavaScript value. You can parse a type using the type-check library's parseType function.
Object - the type, in the parsed type format, to check against
String - the value, written in the levn format
Maybe Object - an optional parameter specifying additional options
* - the resulting JavaScript value
var parsedType = require('type-check').parseType('[Number]');
parsedTypeParse(parsedType, '1,2,3'); // [1, 2, 3]
Levn can use the type information you provide to choose the appropriate value to produce from the input. For the same input, it will choose a different output value depending on the type provided. For example, parse('Number', '2') will produce the number 2, but parse('String', '2') will produce the string "2".
If you do not provide type information, and simply use *, levn will parse the input according to the unambiguous “explicit” mode, which we will now detail - you can also set the explicit option to true manually in the options.
"string" or 'string' is parsed as a String, eg. "a msg" is "a msg".
#date# is parsed as a Date, eg. #2011-11-11# is new Date('2011-11-11').
/regexp/flags is parsed as a RegExp, eg. /re/gi is /re/gi.
undefined, null, NaN, true, and false are all their JavaScript equivalents.
[element1, element2, etc] is an Array, and the casting procedure is recursively applied to each element. Eg. [1,2,3] is [1,2,3].
(element1, element2, etc) is a tuple, and the casting procedure is recursively applied to each element. Eg. (1, a) is (1, a) (is [1, 'a']).
{key1: val1, key2: val2, ...} is an Object, and the casting procedure is recursively applied to each property. Eg. {a: 1, b: 2} is {a: 1, b: 2}.
Anything else (not containing the special characters [, ], (, ), {, }, :, or ,) is a string, eg. $12- blah is "$12- blah".
If you do provide type information, you can make your input more concise as the program already has some information about what it expects. Please see the type format section of type-check for more information about how to specify types. There are some rules about what levn can do with the information:
If a String is expected, all characters of the input (including any special ones) become part of the output. Eg. [({})] is "[({})]", and "hi" is '"hi"'.
If a Date is expected, the surrounding # can be omitted. Eg. 2011-11-11 is new Date('2011-11-11').
If a RegExp is expected, the surrounding / can be omitted - this will have the effect of setting the source of the regex to the input. Eg. regex is /regex/.
If an Array is expected, the opening [ and closing ] can be omitted. Eg. 1,2,3 is [1,2,3].
If a tuple is expected, the opening ( and closing ) can be omitted. Eg. 1, a is (1, a) (is [1, 'a']).
If an Object is expected, the opening { and closing } can be omitted. Eg. a: 1, b: 2 is {a: 1, b: 2}.
If you list multiple types (eg. Number | String), it will first attempt to cast to the first type and then validate - if the validation fails it will move on to the next type and so forth, left to right. You must be careful as some types will succeed with any input, such as String. Thus put String at the end of your list. In non-explicit mode, Date and RegExp will succeed with a large variety of input - also be careful with these and list them near the end if not last in your list.
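The left-to-right union handling can be sketched as follows (a simplified re-implementation of the idea, not levn's actual code; the casters below are hypothetical stand-ins):

```javascript
// Try each type's cast left to right; the first one whose result
// validates wins. Greedy types like String therefore belong last.
function castUnion(casters, input) {
  for (const { cast, validate } of casters) {
    const value = cast(input);
    if (validate(value)) return value;
  }
  throw new Error('no type in the union matched');
}

// A toy `Number | String`: Number is tried first, String accepts anything.
const numberOrString = [
  { cast: Number, validate: (v) => !Number.isNaN(v) },
  { cast: String, validate: () => true },
];
castUnion(numberOrString, '2');   // 2
castUnion(numberOrString, 'str'); // 'str'
```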
Whitespace between special characters and elements is inconsequential.
Options is an object. It is an optional parameter to the parse and parsedTypeParse functions.
A Boolean. By default it is false.
Example:
parse('RegExp', 're', {explicit: false}); // /re/
parse('RegExp', 're', {explicit: true}); // Error: ... does not type check...
parse('RegExp | String', 're', {explicit: true}); // 're'
explicit sets whether to be in explicit mode or not. Using * automatically activates explicit mode. For more information, read the levn format section.
An Object. Empty {} by default.
Example:
var options = {
customTypes: {
Even: {
typeOf: 'Number',
validate: function (x) {
return x % 2 === 0;
},
cast: function (x) {
return {type: 'Just', value: parseInt(x)};
}
}
}
}
parse('Even', '2', options); // 2
parse('Even', '3', options); // Error: Value: "3" does not type check...
Another example:
function Person(name, age){
this.name = name;
this.age = age;
}
var options = {
customTypes: {
Person: {
typeOf: 'Object',
validate: function (x) {
return x instanceof Person;
},
cast: function (value, options, typesCast) {
var name, age;
if ({}.toString.call(value).slice(8, -1) !== 'Object') {
return {type: 'Nothing'};
}
name = typesCast(value.name, [{type: 'String'}], options);
age = typesCast(value.age, [{type: 'Number'}], options);
return {type: 'Just', value: new Person(name, age)};
}
}
}
parse('Person', '{name: Laura, age: 25}', options); // Person {name: 'Laura', age: 25}
customTypes is an object whose keys are the name of the types, and whose values are an object with three properties, typeOf, validate, and cast. For more information about typeOf and validate, please see the custom types section of type-check.
cast is a function which receives three arguments, the value under question, options, and the typesCast function. In cast, attempt to cast the value into the specified type. If you are successful, return an object in the format {type: 'Just', value: CAST-VALUE}, if you know it won’t work, return {type: 'Nothing'}. You can use the typesCast function to cast any child values. Remember to pass options to it. In your function you can also check for options.explicit and act accordingly.
levn is written in LiveScript - a language that compiles to JavaScript. It uses type-check to both parse types and validate values. It also uses the prelude.ls library.
A tiny, fast JavaScript parser written in JavaScript.
You are welcome to report bugs or create pull requests on github. For questions and discussion, please use the Tern discussion forum.
The easiest way to install acorn is from npm:
Alternately, you can download the source and build acorn yourself:
parse(input, options) is the main interface to the library. The input parameter is a string, options must be an object setting some of the options listed below. The return value will be an abstract syntax tree object as specified by the ESTree spec.
When encountering a syntax error, the parser will raise a SyntaxError object with a meaningful message. The error object will have a pos property that indicates the string offset at which the error occurred, and a loc object that contains a {line, column} object referring to that same position.
Options are provided in a second argument, which should be an object containing any of these fields (only ecmaVersion is required):
ecmaVersion: Indicates the ECMAScript version to parse. Must be either 3, 5, 6 (or 2015), 7 (2016), 8 (2017), 9 (2018), 10 (2019), 11 (2020), or 12 (2021, partial support), or "latest" (the latest the library supports). This influences support for strict mode, the set of reserved words, and support for new syntax features.
NOTE: Only ‘stage 4’ (finalized) ECMAScript features are being implemented by Acorn. Other proposed new features must be implemented through plugins.
sourceType: Indicate the mode the code should be parsed in. Can be either "script" or "module". This influences global strict mode and parsing of import and export declarations.
NOTE: If set to "module", then static import / export syntax will be valid, even if ecmaVersion is less than 6.
onInsertedSemicolon: If given a callback, that callback will be called whenever a missing semicolon is inserted by the parser. The callback will be given the character offset of the point where the semicolon is inserted as argument, and if locations is on, also a {line, column} object representing this position.
onTrailingComma: Like onInsertedSemicolon, but for trailing commas.
allowReserved: If false, using a reserved word will generate an error. Defaults to true for ecmaVersion 3, false for higher versions. When given the value "never", reserved words and keywords can also not be used as property names (as in Internet Explorer’s old parser).
allowReturnOutsideFunction: By default, a return statement at the top level raises an error. Set this to true to accept such code.
allowImportExportEverywhere: By default, import and export declarations can only appear at a program’s top level. Setting this option to true allows them anywhere where a statement is allowed.
allowAwaitOutsideFunction: By default, await expressions can only appear inside async functions. Setting this option to true allows top-level await expressions. They are still not allowed in non-async functions, though.
allowHashBang: When this is enabled (off by default), if the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.
locations: When true, each node has a loc object attached with start and end subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form. Default is false.
onToken: If a function is passed for this option, each found token will be passed to it in the same format as tokens returned from tokenizer().getToken().
If an array is passed, each found token is pushed to it.
Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
onComment: If a function is passed for this option, whenever a comment is encountered the function will be called with the following parameters:
- block: true if the comment is a block comment, false if it is a line comment.
- text: The content of the comment.
- start: Character offset of the start of the comment.
- end: Character offset of the end of the comment.

When the locations option is on, the {line, column} locations of the comment’s start and end are passed as two additional parameters.
If an array is passed for this option, each found comment is pushed to it as an object in Esprima format:
{
"type": "Line" | "Block",
"value": "comment text",
"start": Number,
"end": Number,
// If `locations` option is on:
"loc": {
"start": {line: Number, column: Number},
"end": {line: Number, column: Number}
},
// If `ranges` option is on:
"range": [Number, Number]
}
Note that you are not allowed to call the parser from the callback—that will corrupt its internal state.
ranges: Nodes have their start and end character offsets recorded in start and end properties (directly on the node, rather than in the loc object, which holds line/column data). To also add a semi-standardized range property holding a [start, end] array with the same numbers, set the ranges option to true.
program: It is possible to parse multiple files into a single AST by passing the tree produced by parsing the first file as the program option in subsequent parses. This will add the toplevel forms of the parsed file to the “Program” (top) node of an existing parse tree.
sourceFile: When the locations option is true, you can pass this option to add a source attribute in every node’s loc object. Note that the contents of this option are not examined or processed in any way; you are free to use whatever format you choose.
directSourceFile: Like sourceFile, but a sourceFile property will be added (regardless of the location option) directly to the nodes, rather than the loc object.
preserveParens: If this option is true, parenthesized expressions are represented by (non-standard) ParenthesizedExpression nodes that have a single expression property containing the expression inside parentheses.
parseExpressionAt(input, offset, options) will parse a single expression in a string, and return its AST. It will not complain if there is more of the string left after the expression.
tokenizer(input, options) returns an object with a getToken method that can be called repeatedly to get the next token, a {start, end, type, value} object (with added loc property when the locations option is enabled and range property when the ranges option is enabled). When the token’s type is tokTypes.eof, you should stop calling the method, since it will keep returning that same token forever.
In an ES6 environment, the returned result can be used like any other protocol-compliant iterable:
for (let token of acorn.tokenizer(str)) {
// iterate over the tokens
}
// transform code to array of tokens:
var tokens = [...acorn.tokenizer(str)];
tokTypes holds an object mapping names to the token type objects that end up in the type properties of tokens.
getLineInfo(input, offset) can be used to get a {line, column} object for a given program string and offset.
Parser class
Instances of the Parser class contain all the state and logic that drives a parse. It has static methods parse, parseExpressionAt, and tokenizer that match the top-level functions of the same name.
When extending the parser with plugins, you need to call these methods on the extended version of the class. To extend a parser with plugins, you can use its static extend method.
var acorn = require("acorn");
var jsx = require("acorn-jsx");
var JSXParser = acorn.Parser.extend(jsx());
JSXParser.parse("foo(<bar/>)", {ecmaVersion: 2020});
The extend method takes any number of plugin values, and returns a new Parser class that includes the extra parser logic provided by the plugins.
The bin/acorn utility can be used to parse a file from the command line. It accepts as arguments its input file and the following options:
--ecma3|--ecma5|--ecma6|--ecma7|--ecma8|--ecma9|--ecma10: Sets the ECMAScript version to parse. Default is version 9.
--module: Sets the parsing mode to "module". The mode is "script" otherwise.
--locations: Attaches a “loc” object to each node with “start” and “end” subobjects, each of which contains the one-based line and zero-based column numbers in {line, column} form.
--allow-hash-bang: If the code starts with the characters #! (as in a shellscript), the first line will be treated as a comment.
--compact: No whitespace is used in the AST output.
--silent: Do not output the AST, just return the exit status.
--help: Print the usage information and quit.
The utility spits out the syntax tree as JSON data.
Plugins for ECMAScript proposals:
acorn-stage3: Parse most stage 3 proposals, bundling:

A JavaScript library for arbitrary-precision decimal and non-decimal arithmetic.
- toExponential, toFixed, toPrecision and toString methods of JavaScript’s Number type
- toFraction and a correctly-rounded squareRoot method
If a smaller and simpler library is required see big.js. It’s less than half the size but only works with decimal numbers and only has half the methods. It also does not allow NaN or Infinity, or have the configuration options of this library.
See also decimal.js, which among other things adds support for non-integer powers, and performs all operations to a specified number of significant digits.
The library is the single JavaScript file bignumber.js or ES module bignumber.mjs.
The library exports a single constructor function, BigNumber, which accepts a value of type Number, String or BigNumber:
let x = new BigNumber(123.4567);
let y = BigNumber('123456.7e-3');
let z = new BigNumber(x);
x.isEqualTo(y) && y.isEqualTo(z) && x.isEqualTo(z); // true
To get the string value of a BigNumber use toString() or toFixed(). Using toFixed() prevents exponential notation being returned, no matter how large or small the value.
let x = new BigNumber('1111222233334444555566');
x.toString(); // "1.111222233334444555566e+21"
x.toFixed(); // "1111222233334444555566"
If the limited precision of Number values is not well understood, it is recommended to create BigNumbers from String values rather than Number values to avoid a potential loss of precision.
In all further examples below, let, semicolons and toString calls are not shown. If a commented-out value is in quotes it means toString has been called on the preceding expression.
// Precision loss from using numeric literals with more than 15 significant digits.
new BigNumber(1.0000000000000001) // '1'
new BigNumber(88259496234518.57) // '88259496234518.56'
new BigNumber(99999999999999999999) // '100000000000000000000'
// Precision loss from using numeric literals outside the range of Number values.
new BigNumber(2e+308) // 'Infinity'
new BigNumber(1e-324) // '0'
// Precision loss from the unexpected result of arithmetic with Number values.
new BigNumber(0.7 + 0.1) // '0.7999999999999999'
When creating a BigNumber from a Number, note that a BigNumber is created from a Number’s decimal toString() value not from its underlying binary value. If the latter is required, then pass the Number’s toString(2) value and specify base 2.
BigNumbers can be created from values in bases from 2 to 36. See ALPHABET to extend this range.
a = new BigNumber(1011, 2) // "11"
b = new BigNumber('zz.9', 36) // "1295.25"
c = a.plus(b) // "1306.25"
Performance is better if base 10 is NOT specified for decimal values. Only specify base 10 when it is desired that the number of decimal places of the input value be limited to the current DECIMAL_PLACES setting.
A BigNumber is immutable in the sense that it is not changed by its methods.
The methods that return a BigNumber can be chained.
x.dividedBy(y).plus(z).times(9)
x.times('1.23456780123456789e+9').plus(9876.5432321).dividedBy('4444562598.111772').integerValue()
Some of the longer method names have a shorter alias.
x.squareRoot().dividedBy(y).exponentiatedBy(3).isEqualTo(x.sqrt().div(y).pow(3)) // true
x.modulo(y).multipliedBy(z).eq(x.mod(y).times(z)) // true
As with JavaScript’s Number type, there are toExponential, toFixed and toPrecision methods.
x = new BigNumber(255.5)
x.toExponential(5) // "2.55500e+2"
x.toFixed(5) // "255.50000"
x.toPrecision(5) // "255.50"
x.toNumber() // 255.5
A base can be specified for toString.
Performance is better if base 10 is NOT specified, i.e. use toString() not toString(10). Only specify base 10 when it is desired that the number of decimal places be limited to the current DECIMAL_PLACES setting.
There is a toFormat method which may be useful for internationalisation.
The maximum number of decimal places of the result of an operation involving division (i.e. a division, square root, base conversion or negative power operation) is set using the set or config method of the BigNumber constructor.
The other arithmetic operations always give the exact result.
BigNumber.set({ DECIMAL_PLACES: 10, ROUNDING_MODE: 4 })
x = new BigNumber(2)
y = new BigNumber(3)
z = x.dividedBy(y) // "0.6666666667"
z.squareRoot() // "0.8164965809"
z.exponentiatedBy(-3) // "3.3749999995"
z.toString(2) // "0.1010101011"
z.multipliedBy(z) // "0.44444444448888888889"
z.multipliedBy(z).decimalPlaces(10) // "0.4444444445"
There is a toFraction method with an optional maximum denominator argument
y = new BigNumber(355)
pi = y.dividedBy(113) // "3.1415929204"
pi.toFraction() // [ "7853982301", "2500000000" ]
pi.toFraction(1000) // [ "355", "113" ]
and isNaN and isFinite methods, as NaN and Infinity are valid BigNumber values.
x = new BigNumber(NaN) // "NaN"
y = new BigNumber(Infinity) // "Infinity"
x.isNaN() && !y.isNaN() && !x.isFinite() && !y.isFinite() // true
The value of a BigNumber is stored in a decimal floating point format in terms of a coefficient, exponent and sign.
x = new BigNumber(-123.456);
x.c // [ 123, 45600000000000 ] coefficient (i.e. significand)
x.e // 2 exponent
x.s // -1 sign
For advanced usage, multiple BigNumber constructors can be created, each with their own independent configuration.
// Set DECIMAL_PLACES for the original BigNumber constructor
BigNumber.set({ DECIMAL_PLACES: 10 })
// Create another BigNumber constructor, optionally passing in a configuration object
BN = BigNumber.clone({ DECIMAL_PLACES: 5 })
x = new BigNumber(1)
y = new BN(1)
x.div(3) // '0.3333333333'
y.div(3) // '0.33333'
To avoid having to call toString or valueOf on a BigNumber to get its value in the Node.js REPL or when using console.log use
For further information see the API reference in the doc directory.
The test/modules directory contains the test scripts for each method.
The tests can be run with Node.js or a browser. For Node.js use
npm test
or
$ node test/test
To test a single method, use, for example
$ node test/methods/toFraction
For the browser, open test/test.html.
For Node, if uglify-js is installed
npm install uglify-js -g
then
npm run build
will create bignumber.min.js.
A source map will also be created in the root directory.
See LICENCE.
This module provides miscellaneous facilities for working with strings, numbers, dates, and objects and arrays of these basic types.
Creates a deep copy of a primitive type, object, or array of primitive types.
Returns whether two objects are equal.
Returns true if the given object has no properties and false otherwise. This is O(1) (unlike Object.keys(obj).length === 0, which is O(N)).
Returns true if the given object has an enumerable, non-inherited property called key. For information on enumerability and ownership of properties, see the MDN documentation.
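As an illustration, the two predicates just described could be sketched as follows. The names isEmpty and hasKey are assumed from the descriptions; the module’s real identifiers may differ.

```javascript
// Sketch of the predicates described above (names assumed, not confirmed).
function isEmpty(obj) {
  // Effectively O(1): returns as soon as the first own property is seen.
  for (const key in obj) {
    if (Object.prototype.hasOwnProperty.call(obj, key)) return false;
  }
  return true;
}

function hasKey(obj, key) {
  // True only for enumerable, non-inherited properties.
  return Object.prototype.hasOwnProperty.call(obj, key);
}

console.log(isEmpty({}));            // true
console.log(isEmpty({ a: 1 }));      // false
console.log(hasKey({ a: 1 }, 'a'));  // true
```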
Like Array.forEach, but iterates enumerable, owned properties of an object rather than elements of an array. Equivalent to:
for (var key in obj) {
if (Object.prototype.hasOwnProperty.call(obj, key)) {
callback(key, obj[key]);
}
}
Flattens an object up to a given level of nesting, returning an array of arrays of length “depth + 1”, where the first “depth” elements correspond to flattened columns and the last element contains the remaining object. For example:
flattenObject({
'I': {
'A': {
'i': {
'datum1': [ 1, 2 ],
'datum2': [ 3, 4 ]
},
'ii': {
'datum1': [ 3, 4 ]
}
},
'B': {
'i': {
'datum1': [ 5, 6 ]
},
'ii': {
'datum1': [ 7, 8 ],
'datum2': [ 3, 4 ],
},
'iii': {
}
}
},
'II': {
'A': {
'i': {
'datum1': [ 1, 2 ],
'datum2': [ 3, 4 ]
}
}
}
}, 3)
becomes:
[
[ 'I', 'A', 'i', { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ],
[ 'I', 'A', 'ii', { 'datum1': [ 3, 4 ] } ],
[ 'I', 'B', 'i', { 'datum1': [ 5, 6 ] } ],
[ 'I', 'B', 'ii', { 'datum1': [ 7, 8 ], 'datum2': [ 3, 4 ] } ],
[ 'I', 'B', 'iii', {} ],
[ 'II', 'A', 'i', { 'datum1': [ 1, 2 ], 'datum2': [ 3, 4 ] } ]
]
This function is strict: “depth” must be a non-negative integer and “obj” must be a non-null object with at least “depth” levels of nesting under all keys.
This is similar to flattenObject except that instead of returning an array, this function invokes func(entry) for each entry in the array that flattenObject would return. flattenIter(obj, depth, func) is logically equivalent to flattenObject(obj, depth).forEach(func). Importantly, this version never constructs the full array. Its memory usage is O(depth) rather than O(n) (where n is the number of flattened elements).
There’s another difference between flattenObject and flattenIter that’s related to the special case where depth === 0. In this case, flattenObject omits the array wrapping obj (which is regrettable).
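The iteration strategy described above can be sketched as a simple recursion. This is an illustration of the semantics, not the module’s actual source:

```javascript
// Sketch of flattenIter: walk "depth" levels of nesting and invoke
// func once per flattened entry, never materializing the whole array.
function flattenIter(obj, depth, func) {
  function recurse(node, d, accum) {
    if (d === 0) {
      func(accum.concat([node]));
      return;
    }
    Object.keys(node).forEach((key) => {
      recurse(node[key], d - 1, accum.concat([key]));
    });
  }
  recurse(obj, depth, []);
}

flattenIter({ I: { A: 1, B: 2 } }, 2, (entry) => console.log(entry));
// [ 'I', 'A', 1 ]
// [ 'I', 'B', 2 ]
```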
Fetch nested property “key” from object “obj”, traversing objects as needed. For example, pluck(obj, "foo.bar.baz") is roughly equivalent to obj.foo.bar.baz, except that:
- pluck({}, "foo.bar") is just undefined.
- pluck({ 'foo.bar': 1 }, 'foo.bar') is 1, not undefined. This is also true recursively, so pluck({ 'a': { 'foo.bar': 1 } }, 'a.foo.bar') is also 1, not undefined.

Returns an element from “array” selected uniformly at random. If “array” is empty, throws an Error.
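A minimal sketch of the pluck semantics described above, where a literal property containing dots wins over traversal (an illustration, not the module’s actual implementation):

```javascript
// Sketch: prefer a literal dotted key on the current object, otherwise
// split at the first dot and recurse.
function pluck(obj, key) {
  if (obj === null || typeof obj !== 'object') return undefined;
  if (Object.prototype.hasOwnProperty.call(obj, key)) return obj[key];
  const dot = key.indexOf('.');
  if (dot === -1) return undefined;
  return pluck(obj[key.slice(0, dot)], key.slice(dot + 1));
}

console.log(pluck({}, 'foo.bar'));                        // undefined
console.log(pluck({ 'foo.bar': 1 }, 'foo.bar'));          // 1
console.log(pluck({ a: { 'foo.bar': 1 } }, 'a.foo.bar')); // 1
```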
Returns true if the given string starts with the given prefix and false otherwise.
Returns true if the given string ends with the given suffix and false otherwise.
Parses the contents of str (a string) as an integer. On success, the integer value is returned (as a number). On failure, an error is returned describing why parsing failed.
By default, leading and trailing whitespace characters are not allowed, nor are trailing characters that are not part of the numeric representation. This behaviour can be toggled by using the options below. The empty string ('') is not considered valid input. If the return value cannot be precisely represented as a number (i.e., is smaller than Number.MIN_SAFE_INTEGER or larger than Number.MAX_SAFE_INTEGER), an error is returned. Additionally, the string '-0' will be parsed as the integer 0, instead of as the IEEE floating point value -0.
This function accepts both upper and lowercase characters for digits, similar to parseInt(), Number(), and strtol(3C).
The following may be specified in options:
| Option | Type | Default | Meaning |
|---|---|---|---|
| base | number | 10 | numeric base (radix) to use, in the range 2 to 36 |
| allowSign | boolean | true | whether to interpret any leading + (positive) and - (negative) characters |
| allowImprecise | boolean | false | whether to accept values that may have lost precision (past MAX_SAFE_INTEGER or below MIN_SAFE_INTEGER) |
| allowPrefix | boolean | false | whether to interpret the prefixes 0b (base 2), 0o (base 8), 0t (base 10), or 0x (base 16) |
| allowTrailing | boolean | false | whether to ignore trailing characters |
| trimWhitespace | boolean | false | whether to trim any leading or trailing whitespace/line terminators |
| leadingZeroIsOctal | boolean | false | whether a leading zero indicates octal |
Note that if base is unspecified and allowPrefix or leadingZeroIsOctal is set, then the leading characters can change the default base from 10. If base is explicitly specified and allowPrefix is true, then the prefix will only be accepted if it matches the specified base. base and leadingZeroIsOctal cannot be used together.
Context: It’s tricky to parse integers with JavaScript’s built-in facilities for several reasons:
- parseInt() and Number() by default allow the base to be specified in the input string by a prefix (e.g., 0x for hex).
- parseInt() allows trailing nonnumeric characters.
- Number(str) returns 0 when str is the empty string ('').
- parseInt('9007199254740993') returns 9007199254740992.
- Both functions accept - and + signs before the digit.

While each of these may be desirable in some contexts, there are also times when none of them are wanted. parseInteger() grants greater control over what input is permissible.
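To make the contrast concrete, here is a greatly simplified base-10 sketch of the strict behaviour motivated above: no prefixes, no trailing characters, range-checked, and errors returned rather than thrown. The name parseIntegerStrict is invented for this sketch; the real parseInteger supports many more options.

```javascript
// Simplified sketch of strict integer parsing (hypothetical helper).
function parseIntegerStrict(str) {
  // Require an optional sign followed only by digits; '' is invalid.
  if (!/^[-+]?\d+$/.test(str)) {
    return new Error('invalid integer: ' + JSON.stringify(str));
  }
  const value = Number(str);
  if (value < Number.MIN_SAFE_INTEGER || value > Number.MAX_SAFE_INTEGER) {
    return new Error('value would lose precision: ' + str);
  }
  return value === 0 ? 0 : value; // normalizes '-0' to 0
}

console.log(parseIntegerStrict('42'));                       // 42
console.log(parseIntegerStrict('') instanceof Error);        // true
console.log(parseIntegerStrict('12px') instanceof Error);    // true
```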
Converts a Date object to an ISO8601 date string of the form “YYYY-MM-DDTHH:MM:SS.sssZ”. This format is not customizable.
Parses a date expressed as a string, as either a number of milliseconds since the epoch or any string format that Date accepts, giving preference to the former where these two sets overlap (e.g., strings containing small numbers).
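The two date helpers described above can be sketched like this (the names iso8601 and parseDateTime are assumed from the descriptions):

```javascript
// Sketches of the date helpers described above (names assumed).
function iso8601(d) {
  return d.toISOString(); // "YYYY-MM-DDTHH:MM:SS.sssZ"
}

function parseDateTime(str) {
  // Prefer milliseconds-since-epoch for purely numeric strings,
  // falling back to whatever Date itself accepts.
  if (/^-?\d+$/.test(str)) return new Date(Number(str));
  return new Date(str);
}

console.log(iso8601(new Date(0)));          // '1970-01-01T00:00:00.000Z'
console.log(parseDateTime('0').getTime());  // 0
```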
Add two hrtime intervals (as from Node’s process.hrtime()), returning a new hrtime interval array. This function does not modify either input argument.
Add two hrtime intervals (as from Node’s process.hrtime()), storing the result in timeA. This function overwrites (and returns) the first argument passed in.
This suite of functions converts a hrtime interval (as from Node’s process.hrtime()) into a scalar number of nanoseconds, microseconds or milliseconds. Results are truncated, as with Math.floor().
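The hrtime arithmetic above can be sketched as follows (function names assumed; an hrtime interval is a [seconds, nanoseconds] pair):

```javascript
// Sketch: add two hrtime intervals without modifying the inputs.
function hrtimeAdd(a, b) {
  let sec = a[0] + b[0];
  let ns = a[1] + b[1];
  if (ns >= 1e9) { // carry nanoseconds into seconds
    sec += 1;
    ns -= 1e9;
  }
  return [sec, ns];
}

// Sketch: convert an interval to a truncated number of milliseconds.
function hrtimeMillisec(t) {
  return Math.floor(t[0] * 1000 + t[1] / 1e6);
}

console.log(hrtimeAdd([1, 900000000], [0, 200000000])); // [ 2, 100000000 ]
console.log(hrtimeMillisec([1, 500000000]));            // 1500
```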
Uses JSON validation (via JSV) to validate the given object against the given schema. On success, returns null. On failure, returns (does not throw) a useful Error object.
Check an object for unexpected properties. Accepts the object to check, and an array of allowed property name strings. If extra properties are detected, an array of extra property names is returned. If no properties other than those in the allowed list are present on the object, the returned array will be of zero length.
Merge properties from objects “provided”, “overrides”, and “defaults”. The intended use case is for functions that accept named arguments in an “args” object, but want to provide some default values and override other values. In that case, “provided” is what the caller specified, “overrides” are what the function wants to override, and “defaults” contains default values.
The function starts with the values in “defaults”, overrides them with the values in “provided”, and then overrides those with the values in “overrides”. For convenience, any of these objects may be falsey, in which case they will be ignored. The input objects are never modified, but properties in the returned object are not deep-copied.
For example:
mergeObjects(undefined, { 'objectMode': true }, { 'highWaterMark': 0 })
returns:
{ 'objectMode': true, 'highWaterMark': 0 }
For another example:
mergeObjects(
{ 'highWaterMark': 16, 'objectMode': 7 }, /* from caller */
{ 'objectMode': true }, /* overrides */
{ 'highWaterMark': 0 }); /* default */
returns:
{ 'objectMode': true, 'highWaterMark': 16 }
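The merge order described above (defaults, then provided, then overrides, ignoring falsey arguments) can be sketched as:

```javascript
// Sketch of the mergeObjects semantics described above
// (not the module's actual source).
function mergeObjects(provided, overrides, defaults) {
  const rv = {};
  // Later sources win: defaults < provided < overrides.
  [defaults, provided, overrides].forEach((source) => {
    if (!source) return; // falsey arguments are ignored
    Object.keys(source).forEach((key) => {
      rv[key] = source[key];
    });
  });
  return rv;
}

console.log(mergeObjects(
  { highWaterMark: 16, objectMode: 7 }, // from caller
  { objectMode: true },                 // overrides
  { highWaterMark: 0 }                  // defaults
)); // { highWaterMark: 16, objectMode: true }
```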
See separate contribution guidelines.
Extends the JavaScript ES6 Set class and implements new functions on it.
Since it extends the ES6 Set class, it already has all the Set functionality.
https://developer.mozilla.org/en-US/docs/Web/JavaScript/Reference/Global_Objects/Set
The constructor accepts an optional array of elements, just like Set.
const set1 = new EnhancedSet(['A', 'B', 'C', 'D']);
const set2 = new EnhancedSet(['C', 'D', 'E', 'F']);
applies union with another set and returns a set with all elements of the two.
https://en.wikipedia.org/wiki/Union_(set_theory)
| name | type |
|---|---|
| set | Set |

| runtime | explanation |
|---|---|
| O(n+m) | n = number of elements of the first set, m = number of elements of the second set |

| return |
|---|
| EnhancedSet |
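The union behaviour described above could be implemented on a Set subclass roughly like this (a sketch of the semantics, not the module’s actual source):

```javascript
// Sketch: union on a Set subclass, returning a new set with all
// elements of both operands.
class EnhancedSet extends Set {
  union(set) {
    const result = new EnhancedSet(this); // copy this set's elements
    set.forEach((el) => result.add(el));  // then add the other set's
    return result;
  }
}

const a = new EnhancedSet(['A', 'B']);
const b = new EnhancedSet(['B', 'C']);
console.log([...a.union(b)]); // [ 'A', 'B', 'C' ]
```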
intersects the set with another set and returns a set with existing elements in both sets.
https://en.wikipedia.org/wiki/Intersection_(set_theory)
| name | type |
|---|---|
| set | Set |

| runtime | explanation |
|---|---|
| O(n) | n = number of elements of the set |

| return |
|---|
| EnhancedSet |
returns elements in a set and not in the other set relative to their union.
https://en.wikipedia.org/wiki/Complement_(set_theory)
| return |
|---|
| EnhancedSet |
console.log(set1.complement(set2)); // EnhancedSet { 'A', 'B' }
console.log(set2.complement(set1)); // EnhancedSet { 'E', 'F' }
checks if the set is a subset of another set and returns true if all elements of the set exist in the other set.
https://en.wikipedia.org/wiki/Subset
| name | type |
|---|---|
| set | Set |

| runtime | explanation |
|---|---|
| O(n) | n = number of elements of the set |

| return |
|---|
| boolean |
console.log(set1.isSubsetOf(new Set(['A', 'B', 'C', 'D', 'E']))); // true
console.log(set1.isSubsetOf(set2)); // false
checks if the set is a superset of another set and returns true if all elements of the other set exist in the set.
https://en.wikipedia.org/wiki/Subset
| name | type |
|---|---|
| set | Set |

| runtime | explanation |
|---|---|
| O(n) | n = number of elements of the set |
console.log(set1.isSupersetOf(new Set(['A', 'B']))); // true
console.log(set1.isSupersetOf(set2)); // false
applies cartesian product between two sets. The default separator is the empty string ''.
https://en.wikipedia.org/wiki/Cartesian_product
| name | type |
|---|---|
| set | Set |
| separator | string |

| runtime | explanation |
|---|---|
| O(n*m) | n = number of elements of the first set, m = number of elements of the second set |

| return |
|---|
| EnhancedSet |
console.log(set1.product(set2));
/*
EnhancedSet {
'AC',
'AD',
'AE',
'AF',
'BC',
'BD',
'BE',
'BF',
'CC',
'CD',
'CE',
'CF',
'DC',
'DD',
'DE',
'DF'
}
*/
console.log(set1.product(set2, ','));
/*
EnhancedSet {
'A,C',
'A,D',
'A,E',
'A,F',
'B,C',
'B,D',
'B,E',
'B,F',
'C,C',
'C,D',
'C,E',
'C,F',
'D,C',
'D,D',
'D,E',
'D,F'
}
*/
applies cartesian product on the set itself. It projects the power concept on sets and also accepts a separator with a default empty string value ''.
| name | type |
|---|---|
| m | number |
| separator | string |

| runtime | explanation |
|---|---|
| O(n^m) | n = number of elements of the set, m = the multiplication power number |

| return |
|---|
| EnhancedSet |
const x = new EnhancedSet(['A', 'B']);
const y = x.power(2);
console.log(y);
/*
EnhancedSet(4) [Set] {
'AA',
'AB',
'BA',
'BB'
}
*/
const z = y.power(2);
console.log(z);
/*
EnhancedSet(16) [Set] {
'AAAA',
'AAAB',
'AABA',
'AABB',
'ABAA',
'ABAB',
'ABBA',
'ABBB',
'BAAA',
'BAAB',
'BABA',
'BABB',
'BBAA',
'BBAB',
'BBBA',
'BBBB'
}
*/
generates m permutations from the set elements. It also accepts a separator with a default empty string value ''.
| name | type |
|---|---|
| m | number |
| separator | string |

| runtime | explanation |
|---|---|
| O(n^m) | n = number of elements of the set, m = the multiplication power number |

| return |
|---|
| EnhancedSet |
const x = new EnhancedSet(['A', 'B', 'C', 'D']);
const y = x.permutations(2);
console.log(y);
/*
EnhancedSet(12) [Set] {
'AB',
'AC',
'AD',
'BA',
'BC',
'BD',
'CA',
'CB',
'CD',
'DA',
'DB',
'DC'
}
*/
checks if two sets are equal.
| name | type |
|---|---|
| set | Set |

| runtime | explanation |
|---|---|
| O(n) | n = number of elements of the set |

| return |
|---|
| boolean |
console.log(set1.equals(new Set(['B', 'A', 'D', 'C']))); // true
console.log(set1.equals(new EnhancedSet(['D', 'C']))); // false
filters the set based on a callback and returns the filtered set.
| name | type |
|---|---|
| cb | function |

| runtime | explanation |
|---|---|
| O(n) | n = number of elements of the set |

| return |
|---|
| EnhancedSet |
converts the set into an array.
| return |
|---|
| array |
clones the set.
| return |
|---|
| EnhancedSet |
grunt build
A faster node-glob alternative.
- Supports negative patterns (['*', '!*.md']).
- Supports patterns like !**/node_modules/**.
- Can return fs.Stats for a matched path if you want.

If you want to thank me, or promote your Issue.
Sorry, but I have work and support for packages requires some time after work. I will be glad of your support and PR’s.
npm install --save fast-glob
const fg = require('fast-glob');
fg(['src/**/*.js', '!src/**/*.spec.js']).then((entries) => console.log(entries));
fg.async(['src/**/*.js', '!src/**/*.spec.js']).then((entries) => console.log(entries));
const fg = require('fast-glob');
const entries = fg.sync(['src/**/*.js', '!src/**/*.spec.js']);
console.log(entries);
const fg = require('fast-glob');
const stream = fg.stream(['src/**/*.js', '!src/**/*.spec.js']);
const entries = [];
stream.on('data', (entry) => entries.push(entry));
stream.once('error', console.log);
stream.once('end', () => console.log(entries));
Returns a Promise with an array of matching entries.
Returns an array of matching entries.
Returns a ReadableStream when the data event will be emitted with Entry.
string|string[]
This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns.
Object
See the options section for more detailed information.
Return a set of tasks based on provided patterns. All tasks satisfy the Task interface:
interface Task {
/**
* Parent directory for all patterns inside this task.
*/
base: string;
/**
* Dynamic or static patterns are in this task.
*/
dynamic: boolean;
/**
* All patterns.
*/
patterns: string[];
/**
* Only positive patterns.
*/
positive: string[];
/**
* Only negative patterns without ! symbol.
*/
negative: string[];
}
An entry is a string if the stats option is disabled; otherwise it is fs.Stats with two additional path and depth properties.
string
process.cwd()
The current working directory in which to search.
number|boolean
true
The deep option can be set to true to traverse the entire directory structure, or it can be set to a number to only traverse that many levels deep.
For example, you have the following tree:
test
└── one
└── two
└── index.js
:book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a cwd option.
fg('test/**', { onlyFiles: false, deep: 0 });
// -> ['test/one']
fg('test/**', { onlyFiles: false, deep: 1 });
// -> ['test/one', 'test/one/two']
fg('**', { onlyFiles: false, cwd: 'test', deep: 0 });
// -> ['one']
fg('**', { onlyFiles: false, cwd: 'test', deep: 1 });
// -> ['one', 'one/two']
string[]
[]
An array of glob patterns to exclude matches.
boolean
false
Allow patterns to match filenames starting with a period (files & directories), even if the pattern does not explicitly have a period in that spot.
boolean
false
Return fs.Stats with two additional path and depth properties instead of a string.
boolean
true
Return only files.
boolean
false
Return only directories.
boolean
true
Follow symlinked directories when expanding ** patterns.
boolean
true
Prevent duplicate results.
boolean
false
Add a / character to directory entries.
boolean
false
Return absolute paths for matched entries.
:book: Note that you need to use this option if you want to use absolute negative patterns like
${__dirname}/*.md.
boolean
false
Disable expansion of brace patterns ({a,b}, {1..3}).
boolean
true
The nobrace option without double-negation. This option has a higher priority than nobrace.
boolean
false
Disable matching with globstars (**).
boolean
true
The noglobstar option without double-negation. This option has a higher priority than noglobstar.
boolean
false
Disable extglob support (patterns like +(a|b)), so that extglobs are regarded as literal characters.
boolean
true
The noext option without double-negation. This option has a higher priority than noext.
boolean
false
Disable a case-sensitive mode for matching files.
- File system: test/file.md, test/File.md
- test/file.* pattern (false): test/file.md
- test/file.* pattern (true): test/file.md, test/File.md

boolean
true
The nocase option without double-negation. This option has a higher priority than nocase.
boolean
false
Allow glob patterns without slashes to match a file path based on its basename. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123.
Function
null
Allows you to transform a path or fs.Stats object before sending it to the array.
const fg = require('fast-glob');
const entries1 = fg.sync(['**/*.scss']);
const entries2 = fg.sync(['**/*.scss'], { transform: (entry) => '_' + entry });
console.log(entries1); // ['a.scss', 'b.scss']
console.log(entries2); // ['_a.scss', '_b.scss']
If you are using TypeScript, you probably want to specify your own type of the returned array.
import * as fg from 'fast-glob';
interface IMyOwnEntry {
path: string;
}
const entries: IMyOwnEntry[] = fg.sync<IMyOwnEntry>(['*.md'], {
transform: (entry) => typeof entry === 'string' ? { path: entry } : { path: entry.path }
// Will throw compilation error for non-IMyOwnEntry types (boolean, for example)
});
You can use a negative pattern like this: !**/node_modules or !**/node_modules/**. Also you can use the ignore option. Just look at the example below.
first/
├── file.md
└── second
└── file.txt
If you don’t want to read the second directory, you must write the following pattern: !**/second or !**/second/**.
fg.sync(['**/*.md', '!**/second']); // ['first/file.md']
fg.sync(['**/*.md'], { ignore: '**/second/**' }); // ['first/file.md']
:warning: When you write
!**/second/**/*
it means that the directory will be read, but all the entries will not be included in the results.
You have to understand that if you write the pattern to exclude directories, then the directory will not be read under any circumstances.
You cannot use UNC paths as patterns (due to syntax), but you can use them as cwd directory.
fg.sync('*', { cwd: '\\\\?\\C:\\Python27' /* or //?/C:/Python27 */ });
fg.sync('Python27/*', { cwd: '\\\\?\\C:\\' /* or //?/C:/ */ });
Compatible with node-glob? Not fully, because fast-glob does not implement all options of node-glob. See the table below.
| node-glob | fast-glob |
|---|---|
| cwd | cwd |
| root | – |
| dot | dot |
| nomount | – |
| mark | markDirectories |
| nosort | – |
| nounique | unique |
| nobrace | nobrace or brace |
| noglobstar | noglobstar or globstar |
| noext | noext or extension |
| nocase | nocase or case |
| matchBase | matchbase |
| nodir | onlyFiles |
| ignore | ignore |
| follow | followSymlinkedDirectories |
| realpath | – |
| absolute | absolute |
Tech specs:
Server: Vultr Bare Metal
You can see the results here for the latest release.
See the Releases section of our GitHub project for changelogs for each release version.
ignore is a manager, filter and parser which is implemented in pure JavaScript according to the .gitignore spec 2.22.1.
ignore is used by eslint, gitbook and many others.
Pay ATTENTION that minimatch (which is used by fstream-ignore) does not follow the gitignore spec.
To filter filenames according to a .gitignore file, I recommend this npm package, ignore.
To parse an .npmignore file, you should use minimatch, because an .npmignore file is parsed by npm using minimatch and it does not work in the .gitignore way.
ignore is fully tested, and has more than five hundred unit tests.
Tested on Linux with node 0.8 - 7.x and on Windows with node 0.10 - 7.x; node < 0.10 is not tested on Windows due to the lack of AppVeyor support. Actually, ignore does not rely on any specific version of node.
Since 4.0.0, ignore no longer supports node < 6 by default; to use it in node < 6, require('ignore/legacy'). For details, see CHANGELOG.
Pathname Conventions

glob-gitignore matches files using patterns and filters them according to gitignore rules.

const paths = [
'.abc/a.js', // filtered out
'.abc/d/e.js' // included
]
ig.filter(paths) // ['.abc/d/e.js']
ig.ignores('.abc/a.js') // true

ig.filter(['.abc\\a.js', '.abc\\d\\e.js'])
// if the code above runs on windows, the result will be
// ['.abc\\d\\e.js']

ignore is a standalone module, and is much simpler so that it could easily work with other programs, unlike isaacs’s fstream-ignore which must work with the modules of the fstream family.
ignore only contains utility methods to filter paths according to the specified ignore rules, so:

- ignore never tries to find out ignore rules by traversing directories or fetching from git configurations.
- ignore doesn't care about sub-modules of git projects.

It handles the gitignore matching rules precisely:

- '/*.js' should only match 'a.js', but not 'abc/a.js'.
- '**/foo' should match 'foo' anywhere.
- 'a ' (one trailing space) should not match 'a  ' (two trailing spaces).
- 'a \ ' matches 'a  '.

These behaviors are consistent with git check-ignore.

.add(pattern)

- pattern String | Ignore: an ignore pattern string, or the Ignore instance
- pattern Array<String | Ignore>: an array of ignore patterns

Adds a rule or several rules to the current manager.
Returns this
Notice that a line starting with '#'(hash) is treated as a comment. Put a backslash (\) in front of the first hash for patterns that begin with a hash, if you want to ignore a file with a hash at the beginning of the filename.
pattern could either be a line of ignore pattern or a string of multiple ignore patterns, which means we could just ignore().add() the content of an ignore file:
pattern could also be an ignore instance, so that we could easily inherit the rules of another Ignore instance.
REMOVED in 3.x for now.
To upgrade ignore@2.x up to 3.x, use
import fs from 'fs'
if (fs.existsSync(filename)) {
ignore().add(fs.readFileSync(filename).toString())
}

instead.
Filters the given array of pathnames, and returns the filtered array.
- paths Array.<Pathname>: The array of pathnames to be filtered.

Pathname Conventions:

Pathname should be a path.relative()d pathname: a string that has been path.join()ed, or the return value of path.relative() to the current directory. For example:
// WRONG, an error will be thrown
ig.ignores('./abc')
// WRONG, for it will never happen, and an error will be thrown
// If the gitignore rule locates at the root directory,
// `'/abc'` should be changed to `'abc'`.
// ```
// path.relative('/', '/abc') -> 'abc'
// ```
ig.ignores('/abc')
// WRONG, that it is an absolute path on Windows, an error will be thrown
ig.ignores('C:\\abc')
// Right
ig.ignores('abc')
// Right
ig.ignores(path.join('./abc')) // path.join('./abc') -> 'abc'

In other words, each Pathname here should be a relative path to the directory of the gitignore rules.
Suppose the dir structure is:
/path/to/your/repo
|-- a
| |-- a.js
|
|-- .b
|
|-- .c
|-- .DS_store
Then the paths might be like this:
node-ignore does NO fs.stat during path matching, so for the example below:
// First, we add an ignore pattern to ignore a directory
ig.add('config/')
// `ig` does NOT know if 'config', in the real world,
// is a normal file, directory or something.
ig.ignores('config')
// `ig` treats `config` as a file, so it returns `false`
ig.ignores('config/')
// returns `true`

Especially for people who develop libraries based on node-ignore, it is important to understand this.
Usually, you could use glob with option.mark = true to fetch the structure of the current directory:
import glob from 'glob'
glob('**', {
// Adds a / character to directory matches.
mark: true
}, (err, files) => {
if (err) {
return console.error(err)
}
let filtered = ignore().add(patterns).filter(files)
console.log(filtered)
})

new in 3.2.0
Returns Boolean whether pathname should be ignored.
Creates a filter function which could filter an array of paths with Array.prototype.filter.
Returns function(path) the filter function.
Returns TestResult
interface TestResult {
ignored: boolean
// true if the `pathname` is finally unignored by some negative pattern
unignored: boolean
}{ignored: true, unignored: false}: the pathname is ignored{ignored: false, unignored: true}: the pathname is unignored{ignored: false, unignored: false}: the pathname is never matched by any ignore rules.options.ignorecase since 4.0.0Similar as the core.ignorecase option of git-config, node-ignore will be case insensitive if options.ignorecase is set to true (the default value), otherwise case sensitive.
ignore.isPathValid(pathname): boolean — since 5.0.0

Check whether the pathname is a valid path.relative()d path according to the convention.
This method is NOT used to check if an ignore pattern is valid.
Since 5.0.0, if an invalid Pathname is passed into ig.ignores(), an error will be thrown, while ignore < 5.0.0 made no guarantee about the return value. The same applies to:
.ignores(pathname: Pathname): boolean
.filter(pathnames: Array<Pathname>): Array<Pathname>
.createFilter(): (pathname: Pathname) => boolean
.test(pathname: Pathname): {ignored: boolean, unignored: boolean}

See the convention here for details.
If there are invalid pathnames, the conversion and filtration should be done by users.
import {isPathValid} from 'ignore' // introduced in 5.0.0
const paths = [
// invalid
//////////////////
'',
false,
'../foo',
'.',
//////////////////
// valid
'foo'
]
.filter(isPathValid)
ig.filter(paths)

Since 4.0.0, ignore will no longer support node < 6; to use ignore in node < 6:
- options of 2.x are unnecessary and removed, so just remove them.
- An ignore() instance is no longer an EventEmitter, and all events are unnecessary and removed.
- .addIgnoreFile() is removed, see the .addIgnoreFile section for details.

Binary Search Tree & AVL Tree (Self Balancing Tree) implementation in javascript.
Both trees have the same interface except that AVL tree will maintain itself balanced by rotating the nodes that become unbalanced during insertion and deletion. If your code requires a strictly balanced tree that always benefits from the log(n) runtime of insert & remove, you should use the AVL one.
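The core of that shared interface can be sketched with a plain-object BST — an illustration of the semantics, not the package's implementation:

```javascript
// Minimal BST sketch: insert updates the value when the key already
// exists, and find returns null when the key is missing.
function insert(root, key, value) {
  if (root === null) return { key, value, left: null, right: null };
  if (key === root.key) root.value = value;              // existing key: update
  else if (key < root.key) root.left = insert(root.left, key, value);
  else root.right = insert(root.right, key, value);
  return root;
}

function find(root, key) {
  if (root === null) return null;                        // missing key
  if (key === root.key) return root;
  return key < root.key ? find(root.left, key) : find(root.right, key);
}

let root = null;
[[50, 'v1'], [80, 'v2'], [30, 'v3']].forEach(([k, v]) => {
  root = insert(root, k, v);
});
root = insert(root, 80, 'v2-updated'); // duplicate key updates the value
```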
inserts a node with key/value into the tree. Inserting a node with an existing key updates the existing node’s value with the new one. AVL tree will rotate nodes properly if the tree becomes unbalanced during insertion.
| params | |
|---|---|
| name | type |
| key | number or string |
| value | object |
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
| runtime |
|---|
| O(log(n)) |
bst.insert(50, 'v1');
bst.insert(80, 'v2');
bst.insert(30, 'v3');
bst.insert(90, 'v4');
bst.insert(60, 'v5');
bst.insert(40, 'v6');
bst.insert(20, 'v7');

checks if a node exists by its key.
| params | |
|---|---|
| name | type |
| key | number or string |
| return |
|---|
| boolean |
| runtime |
|---|
| O(log(n)) |
finds a node in the tree by its key.
| params | |
|---|---|
| name | type |
| key | number or string |
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
| runtime |
|---|
| O(log(n)) |
const n60 = bst.find(60);
console.log(n60.getKey()); // 60
console.log(n60.getValue()); // v5
console.log(bst.find(100)); // null

finds the node with min key in the tree.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
| runtime |
|---|
| O(log(n)) |
finds the node with max key in the tree.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
| runtime |
|---|
| O(log(n)) |
returns the root node of the tree.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
| runtime |
|---|
| O(1) |
returns the count of nodes in the tree.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
traverses the tree in order (left-node-right).
| params | ||
|---|---|---|
| name | type | description |
| cb | function | called with each node |
| runtime |
|---|
| O(n) |
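On a BST, the left-node-right order means keys are visited in ascending order; a recursive sketch on a plain-object tree:

```javascript
// In-order traversal: recurse left, visit the node, recurse right.
// Visits every node once, hence O(n).
function traverseInOrder(node, cb) {
  if (node === null) return;
  traverseInOrder(node.left, cb);
  cb(node);
  traverseInOrder(node.right, cb);
}

const tree = {
  key: 50,
  left: { key: 30, left: { key: 20, left: null, right: null }, right: null },
  right: { key: 80, left: null, right: null }
};

const keys = [];
traverseInOrder(tree, n => keys.push(n.key));
// keys is [20, 30, 50, 80] — ascending order
```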
traverses the tree pre order (node-left-right).
| params | ||
|---|---|---|
| name | type | description |
| cb | function | called with each node |
| runtime |
|---|
| O(n) |
traverses the tree post order (left-right-node).
| params | ||
|---|---|---|
| name | type | description |
| cb | function | called with each node |
| runtime |
|---|
| O(n) |
removes a node from the tree by its key. AVL tree will rotate nodes properly if the tree becomes unbalanced during deletion.
| params | ||
|---|---|---|
| name | type | |
| key | number or string | |
| return |
|---|
| boolean |
| runtime |
|---|
| O(log(n)) |
clears the tree.
| runtime |
|---|
| O(1) |
returns the node’s key that is used to compare with other nodes.
| return |
|---|
| number or string |
change the value that is associated with a node.
| params | ||
|---|---|---|
| name | type | |
| value | object | |
returns the value that is associated with a node.
| return |
|---|
| object |
returns node’s left child node.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
returns node’s right child node.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
returns node’s parent node.
| return | |
|---|---|
| BinarySearchTree | BinarySearchTreeNode |
| AvlTree | AvlTreeNode |
extends BinarySearchTreeNode and adds the following methods:
the height of the node in the tree. root height is 1.
| return |
|---|
| number |
the height of the left child. 0 if no left child.
| return |
|---|
| number |
the height of the right child. 0 if no right child.
| return |
|---|
| number |
returns the node’s balance by subtracting right height from left height.
| return |
|---|
| number |
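The height and balance computations above can be sketched recursively (helper names are illustrative, not the package's API):

```javascript
// Height: 0 for a missing child, otherwise 1 + the taller child's
// height, matching the convention above where the root's height is 1.
function height(node) {
  if (!node) return 0;
  return 1 + Math.max(height(node.left), height(node.right));
}

// Balance: left height minus right height. An AVL tree rotates
// whenever some node's balance leaves the range [-1, 1].
function balanceFactor(node) {
  return height(node.left) - height(node.right);
}

// A left-leaning chain 3 -> 2 -> 1:
const leaning = {
  key: 3,
  left: { key: 2, left: { key: 1, left: null, right: null }, right: null },
  right: null
};
// balanceFactor(leaning) is 2, so an AVL tree would rotate here
```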
grunt build
Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions.
Install with npm:
Install with yarn:
What does this do?
This library generates the source string to be passed to new RegExp() for matching a range of numbers.
Example
A string is returned so that you can do whatever you need with it before passing it to new RegExp() (like adding ^ or $ boundaries, defining flags, or combining it with another string).
Why use this library?
Creating regular expressions for matching numbers gets deceptively complicated pretty fast.
For example, let’s say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc:
- 1 => /1/ (easy enough)
- 1 through 5 => /[1-5]/ (not bad…)
- 1 or 5 => /(1|5)/ (still easy…)
- 1 through 50 => /([1-9]|[1-4][0-9]|50)/ (uh-oh…)
- 1 through 55 => /([1-9]|[1-4][0-9]|5[0-5])/ (no prob, I can do this…)
- 1 through 555 => /([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/ (maybe not…)
- 0001 through 5555 => /(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/ (okay, I get the point!)

The numbers are contrived, but they’re also really basic. In the real world you might need to generate a regex on-the-fly for validation.
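For instance, the 1 through 50 pattern above can be brute-force verified, in the same spirit as this library's own test suite:

```javascript
// Brute-force check of the hand-written "1 through 50" pattern:
// every integer from 1 to 50 should match, everything else should not.
const re = /^([1-9]|[1-4][0-9]|50)$/;

const matched = [];
for (let i = 0; i <= 60; i++) {
  if (re.test(String(i))) matched.push(i);
}
// matched is exactly [1, 2, ..., 50]; 0 and 51-60 are rejected
```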
Learn more
If you’re interested in learning more about character classes and other regex features, I personally have always found regular-expressions.info to be pretty useful.
As of April 27, 2017, this library runs 2,783,483 test assertions against generated regex-ranges to provide brute-force verification that results are indeed correct.
Tests run in ~870ms on my MacBook Pro, 2.5 GHz Intel Core i7.
Generated regular expressions are highly optimized:

- ? conditionals when number(s) or range(s) can be positive or negative

Add this library to your javascript application with the following line of code
The main export is a function that takes two integers: the min value and max value (formatted as strings or numbers).
var source = toRegexRange('15', '95');
//=> 1[5-9]|[2-8][0-9]|9[0-5]
var re = new RegExp('^' + source + '$');
console.log(re.test('14')); //=> false
console.log(re.test('50')); //=> true
console.log(re.test('94')); //=> true
console.log(re.test('96')); //=> false

Type: boolean
Default: undefined
Wrap the returned value in parentheses when there is more than one regex condition. Useful when you’re dynamically generating ranges.
console.log(toRegexRange('-10', '10'));
//=> -[1-9]|-?10|[0-9]
console.log(toRegexRange('-10', '10', {capture: true}));
//=> (-[1-9]|-?10|[0-9])

Type: boolean
Default: undefined
Use the regex shorthand for [0-9]:
console.log(toRegexRange('0', '999999'));
//=> [0-9]|[1-9][0-9]{1,5}
console.log(toRegexRange('0', '999999', {shorthand: true}));
//=> \d|[1-9]\d{1,5}

Type: boolean
Default: true
This option only applies to negative zero-padded ranges. By default, when a negative zero-padded range is defined, the number of leading zeros is relaxed using -0*.
console.log(toRegexRange('-001', '100'));
//=> -0*1|0{2}[0-9]|0[1-9][0-9]|100
console.log(toRegexRange('-001', '100', {relaxZeros: false}));
//=> -0{2}1|0{2}[0-9]|0[1-9][0-9]|100

Why are zeros relaxed for negative zero-padded ranges by default?
Consider the following.
Note that -001 and 100 both contain three digits.
In most zero-padding implementations, only a single leading zero is enough to indicate that zero-padding should be applied. Thus, the leading zeros would be “corrected” on the negative range in the example to -01, instead of -001, to make the total length of each string no greater than the length of the largest number in the range (in other words, -001 is four characters long, but 100 is only three).
If zeros were not relaxed by default, you might expect the resulting regex of the above pattern to match -001 - given that it’s defined that way in the arguments - but it wouldn’t. It would, however, match -01. This gets even more ambiguous with large ranges, like -01 to 1000000.
Thus, we relax zeros by default to provide a more predictable experience for users.
| Range | Result | Compile time |
|---|---|---|
| toRegexRange('5', '5') | 5 | 33μs |
| toRegexRange('5', '6') | 5\|6 | 53μs |
| toRegexRange('29', '51') | 29\|[34][0-9]\|5[01] | 699μs |
| toRegexRange('31', '877') | 3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7] | 711μs |
| toRegexRange('111', '555') | 11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5] | 62μs |
| toRegexRange('-10', '10') | -[1-9]\|-?10\|[0-9] | 74μs |
| toRegexRange('-100', '-10') | -1[0-9]\|-[2-9][0-9]\|-100 | 49μs |
| toRegexRange('-100', '100') | -[1-9]\|-?[1-9][0-9]\|-?100\|[0-9] | 45μs |
| toRegexRange('001', '100') | 0{2}[1-9]\|0[1-9][0-9]\|100 | 158μs |
| toRegexRange('0010', '1000') | 0{2}1[0-9]\|0{2}[2-9][0-9]\|0[1-9][0-9]{2}\|1000 | 61μs |
| toRegexRange('1', '2') | 1\|2 | 10μs |
| toRegexRange('1', '5') | [1-5] | 24μs |
| toRegexRange('1', '10') | [1-9]\|10 | 23μs |
| toRegexRange('1', '100') | [1-9]\|[1-9][0-9]\|100 | 30μs |
| toRegexRange('1', '1000') | [1-9]\|[1-9][0-9]{1,2}\|1000 | 52μs |
| toRegexRange('1', '10000') | [1-9]\|[1-9][0-9]{1,3}\|10000 | 47μs |
| toRegexRange('1', '100000') | [1-9]\|[1-9][0-9]{1,4}\|100000 | 44μs |
| toRegexRange('1', '1000000') | [1-9]\|[1-9][0-9]{1,5}\|1000000 | 49μs |
| toRegexRange('1', '10000000') | [1-9]\|[1-9][0-9]{1,6}\|10000000 | 63μs |
Order of arguments
When the min is larger than the max, values will be flipped to create a valid range:
Is effectively flipped to:
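The flip can be sketched as a simple swap before the range is processed (hypothetical helper, not the library's internal code):

```javascript
// If min is larger than max, swap them so the range is ascending.
function normalizeRange(min, max) {
  return Number(min) > Number(max) ? [max, min] : [min, max];
}

// normalizeRange(55, 10) yields [10, 55] — the same range
// that toRegexRange(10, 55) would describe.
```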
Steps / increments
This library does not support steps (increments). A PR to add support would be welcome.
New features
Adds support for zero-padding!
Optimizations
Repeating ranges are now grouped using quantifiers. Processing time is roughly the same, but the generated regex is much smaller, which should result in faster matching.
Inspired by the python library range-regex.
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on April 27, 2017.

# yargs-parser
The mighty option parser used by yargs.
visit the yargs website for more examples, and thorough usage instructions.

or parse a string!
Convert an array of mixed types before passing to yargs-parser:
const parse = require('yargs-parser')
parse(['-f', 11, '--zoom', 55].join(' ')) // <-- array to string
parse(['-f', 11, '--zoom', 55].map(String)) // <-- array of strings

As of v19 yargs-parser supports Deno:
import parser from "https://deno.land/x/yargs_parser/deno.ts";
const argv = parser('--foo=99 --bar=9987930', {
string: ['bar']
})
console.log(argv)

As of v19 yargs-parser supports ESM (both in Node.js and in the browser):
Node.js:
import parser from 'yargs-parser'
const argv = parser('--foo=99 --bar=9987930', {
string: ['bar']
})
console.log(argv)

Browsers:
<!doctype html>
<body>
<script type="module">
import parser from "https://unpkg.com/yargs-parser@19.0.0/browser.js";
const argv = parser('--foo=99 --bar=9987930', {
string: ['bar']
})
console.log(argv)
</script>
</body>

Parses command line arguments returning a simple mapping of keys and values.
expects:
- args: a string or array of strings representing the options to parse.
- opts: provide a set of hints indicating how args should be parsed:
  - opts.alias: an object representing the set of aliases for a key: {alias: {foo: ['f']}}.
  - opts.array: indicate that keys should be parsed as an array: {array: ['foo', 'bar']}. Keys can also be coerced as they are collected: {array: [{ key: 'foo', boolean: true }, {key: 'bar', number: true}]}.
  - opts.boolean: arguments should be parsed as booleans: {boolean: ['x', 'y']}.
  - opts.coerce: provide a custom synchronous function that returns a coerced value from the argument provided (or throws an error). For arrays the function is called only once for the entire array: {coerce: {foo: function (arg) {return modifiedArg}}}.
  - opts.config: indicate a key that represents a path to a configuration file (this file will be loaded and parsed).
  - opts.configObjects: configuration objects to parse, their properties will be set as arguments: {configObjects: [{'x': 5, 'y': 33}, {'z': 44}]}.
  - opts.configuration: provide configuration options to the yargs-parser (see: configuration).
  - opts.count: indicate a key that should be used as a counter, e.g., -vvv = {v: 3}.
  - opts.default: provide default values for keys: {default: {x: 33, y: 'hello world!'}}.
  - opts.envPrefix: environment variables (process.env) with the prefix provided should be parsed.
  - opts.narg: specify that a key requires n arguments: {narg: {x: 2}}.
  - opts.normalize: path.normalize() will be applied to values set to this key.
  - opts.number: keys should be treated as numbers.
  - opts.string: keys should be treated as strings (even if they resemble a number -x 33).

returns:
- obj: an object representing the parsed value of args
  - key/value: key value pairs for each argument and their aliases.
  - _: an array representing the positional arguments.
  - --: an array with arguments after the end-of-options flag --.

Parses a command line string, returning detailed information required by the yargs engine.
expects:
- args: a string or array of strings representing options to parse.
- opts: provide a set of hints indicating how args should be parsed; inputs are identical to require('yargs-parser')(args, opts={}).
- argv: an object representing the parsed value of args
  - key/value: key value pairs for each argument and their aliases.
  - _: an array representing the positional arguments.
  - --: an array with arguments after the end-of-options flag --.
- error: populated with an error object if an exception occurred during parsing.
- aliases: the inferred list of aliases built by combining lists in opts.alias.
- newAliases: any new aliases added via camel-case expansion:
  - boolean: { fooBar: true }
- defaulted: any new argument created by opts.default, no aliases included.
  - boolean: { foo: true }
- configuration: given by default settings and opts.configuration.

The yargs-parser applies several automated transformations on the keys provided in args. These features can be turned on and off using the configuration field of opts.
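As a rough illustration of one such transformation, camel-case expansion turns a hyphenated key into an extra camel-cased alias (a sketch, not yargs-parser's actual implementation):

```javascript
// Expand a hyphenated flag name into its camel-case alias,
// e.g. 'test-field' -> 'testField'.
function camelCase(key) {
  return key.replace(/-([a-z])/g, (_, ch) => ch.toUpperCase());
}

// With camel-case-expansion enabled, a parsed result would then
// carry both spellings: { 'test-field': 1, testField: 1 }
```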
short-option-groups (default: true). Should a group of short-options be treated as boolean flags?
if disabled:
camel-case-expansion (default: true). Should hyphenated arguments be expanded into camel-case aliases?
if disabled:
dot-notation (default: true). Should keys that contain . be treated as objects?
if disabled:
parse-numbers (default: true). Should keys that look like numbers be treated as such?
if disabled:
parse-positional-numbers (default: true). Should positional keys that look like numbers be treated as such?
if disabled:
boolean-negation (default: true). Should variables prefixed with --no be treated as negations?
if disabled:
combine-arrays (default: false). Should arrays be combined when provided by both command line arguments and a configuration file?
duplicate-arguments-array (default: true). Should arguments be coerced into an array when duplicated:
if disabled:
flatten-duplicate-arrays (default: true). Should array arguments be coerced into a single array when duplicated:
if disabled:
greedy-arrays (default: true). Should arrays consume more than one positional argument following their flag?
if disabled:
Note: in v18.0.0 we are considering defaulting greedy arrays to false.
nargs-eats-options (default: false). Should nargs consume dash options as well as positional arguments?
negation-prefix (default: no-). The prefix to use for negated boolean variables.
if set to quux:
populate-- (default: false). Should unparsed flags be stored in -- or _?
If disabled:
If enabled:
set-placeholder-key (default: false). Should a placeholder be added for keys not set via the corresponding CLI argument?
If disabled:
If enabled:
halt-at-non-option (default: false). Should parsing stop at the first positional argument? This is similar to how e.g. ssh parses its command line.
If disabled:
If enabled:
strip-aliased (default: false). Should aliases be removed before returning results?
If disabled:
node example.js --test-field 1
{ _: [], 'test-field': 1, testField: 1, 'test-alias': 1, testAlias: 1 }

If enabled:
strip-dashed (default: false). Should dashed keys be removed before returning results? This option has no effect if camel-case-expansion is disabled.
If disabled:
If enabled:
unknown-options-as-args (default: false). Should unknown options be treated like regular arguments? An unknown option is one that is not configured in opts.
If disabled
node example.js --unknown-option --known-option 2 --string-option --unknown-option2
{ _: [], unknownOption: true, knownOption: 2, stringOption: '', unknownOption2: true }

If enabled
node example.js --unknown-option --known-option 2 --string-option --unknown-option2
{ _: ['--unknown-option'], knownOption: 2, stringOption: '--unknown-option2' }

Libraries in this ecosystem make a best effort to track Node.js’ release schedule. Here’s a post on why we think this is important.
The yargs project evolves from optimist and minimist. It owes its existence to a lot of James Halliday’s hard work. Thanks substack, beep boop \o/
ISC
Snapdragon utility for creating a new AST node in custom code, such as plugins.
Install with npm:
With snapdragon v0.9.0 and higher you can use this.node() to create a new Node, whenever it makes sense.
var Node = require('snapdragon-node');
var Snapdragon = require('snapdragon');
var snapdragon = new Snapdragon();
// example usage inside a parser visitor function
snapdragon.parser.set('foo', function() {
// get the current "start" position
var pos = this.position();
// returns the match if regex matches the substring
// at the current position on `parser.input`
var match = this.match(/foo/);
if (match) {
// call "pos" on the node, to set the start and end
// positions, and return the node to push it onto the AST
// (snapdragon will push the node onto the correct
// nodes array, based on the stack)
return pos(new Node({type: 'bar', val: match[0]}));
}
});

Create a new AST Node with the given val and type.
Params
- val {String|Object}: Pass a matched substring, or an object to merge onto the node.
- type {String}: The node type to use when val is a string.
- returns {Object}: node instance

Example
Returns true if the given value is a node.
Params
- node {Object}
- returns {Boolean}

Example
var Node = require('snapdragon-node');
var node = new Node({type: 'foo'});
console.log(Node.isNode(node)); //=> true
console.log(Node.isNode({})); //=> false

Define a non-enumerable property on the node instance. Useful for adding properties that shouldn’t be extended or visible during debugging.
Params
- name {String}
- val {any}
- returns {Object}: returns the node instance

Example
Returns true if node.val is an empty string, or node.nodes does not contain any non-empty text nodes.
Params
- fn {Function}: (optional) Filter function that is called on node and/or child nodes. isEmpty will return false immediately when the filter function returns false on any nodes.
- returns {Boolean}

Example
var node = new Node({type: 'text'});
node.isEmpty(); //=> true
node.val = 'foo';
node.isEmpty(); //=> false

Given node foo and node bar, push node bar onto foo.nodes, and set foo as bar.parent.
Params
- node {Object}
- returns {Number}: Returns the length of node.nodes

Example
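A minimal sketch of the push semantics described above (illustrative class, not snapdragon-node itself):

```javascript
// push appends the child to parent.nodes, sets child.parent,
// and returns the new length of parent.nodes.
class SketchNode {
  constructor(type) {
    this.type = type;
    this.nodes = [];
    this.parent = null;
  }
  push(node) {
    node.parent = this;
    return this.nodes.push(node); // Array#push returns the new length
  }
}

const foo = new SketchNode('foo');
const bar = new SketchNode('bar');
const len = foo.push(bar);
// len === 1, and bar.parent === foo
```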
Given node foo and node bar, unshift node bar onto foo.nodes, and set foo as bar.parent.
Params
- node {Object}
- returns {Number}: Returns the length of node.nodes

Example
Pop a node from node.nodes.
- returns {Object}: Returns the popped node

Example
var node = new Node({type: 'foo'});
node.push(new Node({type: 'a'}));
node.push(new Node({type: 'b'}));
node.push(new Node({type: 'c'}));
node.push(new Node({type: 'd'}));
console.log(node.nodes.length);
//=> 4
node.pop();
console.log(node.nodes.length);
//=> 3

Shift a node from node.nodes.
- returns {Object}: Returns the shifted node

Example
var node = new Node({type: 'foo'});
node.push(new Node({type: 'a'}));
node.push(new Node({type: 'b'}));
node.push(new Node({type: 'c'}));
node.push(new Node({type: 'd'}));
console.log(node.nodes.length);
//=> 4
node.shift();
console.log(node.nodes.length);
//=> 3

Remove node from node.nodes.
Params
- node {Object}
- returns {Object}: Returns the removed node.

Example
Get the first child node from node.nodes that matches the given type. If type is a number, the child node at that index is returned.
Params
- type {String}
- returns {Object}: Returns a child node or undefined.

Example
var child = node.find(1); //<= index of the node to get
var child = node.find('foo'); //<= node.type of a child node
var child = node.find(/^(foo|bar)$/); //<= regex to match node.type
var child = node.find(['foo', 'bar']); //<= array of node.type(s)

Return true if the node is the given type.
Params
- type {String}
- returns {Boolean}

Example
var node = new Node({type: 'bar'});
console.log(node.isType('foo')); // false
console.log(node.isType(/^(foo|bar)$/)); // true
console.log(node.isType(['foo', 'bar'])); // true

Return true if the node.nodes has the given type.
Params
- type {String}
- returns {Boolean}

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
foo.push(bar);
console.log(foo.hasType('qux')); // false
console.log(foo.hasType(/^(qux|bar)$/)); // true
console.log(foo.hasType(['qux', 'bar'])); // true

- returns {Array}

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
foo.push(bar);
foo.push(baz);
console.log(bar.siblings.length) // 2
console.log(baz.siblings.length) // 2

- returns {Number}

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
var qux = new Node({type: 'qux'});
foo.push(bar);
foo.push(baz);
foo.unshift(qux);
console.log(bar.index) // 1
console.log(baz.index) // 2
console.log(qux.index) // 0

- returns {Object}

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
foo.push(bar);
foo.push(baz);
console.log(baz.prev.type) // 'bar'

- returns {Object}

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
foo.push(bar);
foo.push(baz);
console.log(bar.siblings.length) // 2
console.log(baz.siblings.length) // 2

- returns {Object}: The first node, or undefined

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
var qux = new Node({type: 'qux'});
foo.push(bar);
foo.push(baz);
foo.push(qux);
console.log(foo.first.type) // 'bar'

- returns {Object}: The last node, or undefined

Example
var foo = new Node({type: 'foo'});
var bar = new Node({type: 'bar'});
var baz = new Node({type: 'baz'});
var qux = new Node({type: 'qux'});
foo.push(bar);
foo.push(baz);
foo.push(qux);
console.log(foo.last.type) // 'qux'

Changelog entries are classified using the following labels from keep-a-changelog:
- added: for new features
- changed: for changes in existing functionality
- deprecated: for once-stable features removed in upcoming releases
- removed: for deprecated features removed in this release
- fixed: for any bug fixes

Custom labels used in this changelog:
- dependencies: bumps dependencies
- housekeeping: code re-organization, minor edits, or other changes that don’t fit in one of the other categories

Changed
Added
First release.
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Please read the contributing guide for advice on opening issues, pull requests, and coding standards.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
A parser that converts TypeScript source code into an ESTree-compatible form
You can find our Getting Started docs here
This parser is somewhat generic and robust, and could be used to power any use-case which requires taking TypeScript source code and producing an ESTree-compatible AST.
In fact, it is already used within these hyper-popular open-source projects to power their TypeScript support:
parse(code, options)

Parses the given string of code with the options provided and returns an ESTree-compatible AST.
interface ParseOptions {
/**
* create a top-level comments array containing all comments
*/
comment?: boolean;
/**
* An array of modules to turn explicit debugging on for.
* - 'typescript-eslint' is the same as setting the env var `DEBUG=typescript-eslint:*`
* - 'eslint' is the same as setting the env var `DEBUG=eslint:*`
* - 'typescript' is the same as setting `extendedDiagnostics: true` in your tsconfig compilerOptions
*
* For convenience, also supports a boolean:
* - true === ['typescript-eslint']
* - false === []
*/
debugLevel?: boolean | ('typescript-eslint' | 'eslint' | 'typescript')[];
/**
* Cause the parser to error if it encounters an unknown AST node type (useful for testing).
* This case usually only occurs when TypeScript releases new features.
*/
errorOnUnknownASTType?: boolean;
/**
* Absolute (or relative to `cwd`) path to the file being parsed.
*/
filePath?: string;
/**
* Enable parsing of JSX.
* For more details, see https://www.typescriptlang.org/docs/handbook/jsx.html
*
* NOTE: this setting does not affect known file types (.js, .jsx, .ts, .tsx, .json) because the
* TypeScript compiler has its own internal handling for known file extensions.
*
* For the exact behavior, see https://github.com/typescript-eslint/typescript-eslint/tree/master/packages/parser#parseroptionsecmafeaturesjsx
*/
jsx?: boolean;
/**
* Controls whether the `loc` information is included on each node.
* The `loc` property is an object which contains the exact line/column the node starts/ends on.
* This is similar to the `range` property, except it is line/column relative.
*/
loc?: boolean;
/*
* Allows overriding of function used for logging.
* When value is `false`, no logging will occur.
* When value is not provided, `console.log()` will be used.
*/
loggerFn?: Function | false;
/**
* Controls whether the `range` property is included on AST nodes.
* The `range` property is a [number, number] which indicates the start/end index of the node in the file contents.
* This is similar to the `loc` property, except this is the absolute index.
*/
range?: boolean;
/**
* Set to true to create a top-level array containing all tokens from the file.
*/
tokens?: boolean;
/*
* The JSX AST changed the node type for string literals
* inside a JSX Element from `Literal` to `JSXText`.
* When value is `true`, these nodes will be parsed as type `JSXText`.
* When value is `false`, these nodes will be parsed as type `Literal`.
*/
useJSXTextNode?: boolean;
}
const PARSE_DEFAULT_OPTIONS: ParseOptions = {
comment: false,
errorOnUnknownASTType: false,
filePath: 'estree.ts', // or 'estree.tsx', if you pass jsx: true
jsx: false,
loc: false,
loggerFn: undefined,
range: false,
tokens: false,
useJSXTextNode: false,
};
declare function parse(
code: string,
options: ParseOptions = PARSE_DEFAULT_OPTIONS,
): TSESTree.Program;

Example usage:
import { parse } from '@typescript-eslint/typescript-estree';
const code = `const hello: string = 'world';`;
const ast = parse(code, {
loc: true,
range: true,
});

parseAndGenerateServices(code, options)

Parses the given string of code with the options provided and returns an ESTree-compatible AST. Accepts additional options which can be used to generate type information along with the AST.
interface ParseAndGenerateServicesOptions extends ParseOptions {
/**
* Causes the parser to error if the TypeScript compiler returns any unexpected syntax/semantic errors.
*/
errorOnTypeScriptSyntacticAndSemanticIssues?: boolean;
/**
* ***EXPERIMENTAL FLAG*** - Use this at your own risk.
*
* Causes TS to use the source files for referenced projects instead of the compiled .d.ts files.
* This feature is not yet optimized, and is likely to cause OOMs for medium to large projects.
*
* This flag REQUIRES at least TS v3.9, otherwise it does nothing.
*
* See: https://github.com/typescript-eslint/typescript-eslint/issues/2094
*/
EXPERIMENTAL_useSourceOfProjectReferenceRedirect?: boolean;
/**
* When `project` is provided, this controls the non-standard file extensions which will be parsed.
* It accepts an array of file extensions, each preceded by a `.`.
*/
extraFileExtensions?: string[];
/**
* Absolute (or relative to `tsconfigRootDir`) path to the file being parsed.
* When `project` is provided, this is required, as it is used to fetch the file from the TypeScript compiler's cache.
*/
filePath?: string;
/**
* Allows the user to control whether or not two-way AST node maps are preserved
* during the AST conversion process.
*
* By default: the AST node maps are NOT preserved, unless `project` has been specified,
* in which case the maps are made available on the returned `parserServices`.
*
* NOTE: If `preserveNodeMaps` is explicitly set by the user, it will be respected,
* regardless of whether or not `project` is in use.
*/
preserveNodeMaps?: boolean;
/**
* Absolute (or relative to `tsconfigRootDir`) paths to the tsconfig(s).
* If this is provided, type information will be returned.
*/
project?: string | string[];
/**
* If you provide a glob (or globs) to the project option, you can use this option to ignore certain folders from
* being matched by the globs.
* This accepts an array of globs to ignore.
*
* By default, this is set to ["/node_modules/"]
*/
projectFolderIgnoreList?: string[];
/**
* The absolute path to the root directory for all provided `project`s.
*/
tsconfigRootDir?: string;
/**
***************************************************************************************
* IT IS RECOMMENDED THAT YOU DO NOT USE THIS OPTION, AS IT CAUSES PERFORMANCE ISSUES. *
***************************************************************************************
*
* When passed with `project`, this allows the parser to create a catch-all, default program.
* This means that if the parser encounters a file not included in any of the provided `project`s,
* it will not error, but will instead parse the file and its dependencies in a new program.
*/
createDefaultProgram?: boolean;
}
interface ParserServices {
program: ts.Program;
esTreeNodeToTSNodeMap: WeakMap<TSESTree.Node, ts.Node | ts.Token>;
tsNodeToESTreeNodeMap: WeakMap<ts.Node | ts.Token, TSESTree.Node>;
hasFullTypeInformation: boolean;
}
interface ParseAndGenerateServicesResult<T extends TSESTreeOptions> {
ast: TSESTree.Program;
services: ParserServices;
}
const PARSE_AND_GENERATE_SERVICES_DEFAULT_OPTIONS: ParseAndGenerateServicesOptions = {
...PARSE_DEFAULT_OPTIONS,
errorOnTypeScriptSyntacticAndSemanticIssues: false,
extraFileExtensions: [],
preserveNodeMaps: false, // or true, if you do not set this, but pass `project`
project: undefined,
projectFolderIgnoreList: ['/node_modules/'],
tsconfigRootDir: process.cwd(),
};
declare function parseAndGenerateServices(
code: string,
options: ParseAndGenerateServicesOptions = PARSE_AND_GENERATE_SERVICES_DEFAULT_OPTIONS,
): ParseAndGenerateServicesResult;

Example usage:
import { parseAndGenerateServices } from '@typescript-eslint/typescript-estree';
const code = `const hello: string = 'world';`;
const { ast, services } = parseAndGenerateServices(code, {
filePath: '/some/path/to/file/foo.ts',
loc: true,
project: './tsconfig.json',
range: true,
});

parseWithNodeMaps(code, options)

Parses the given string of code with the options provided and returns both the ESTree-compatible AST as well as the node maps. This allows you to work with both ASTs without the overhead of types that may come with parseAndGenerateServices.
interface ParseWithNodeMapsResult<T extends TSESTreeOptions> {
ast: TSESTree.Program;
esTreeNodeToTSNodeMap: ParserServices['esTreeNodeToTSNodeMap'];
tsNodeToESTreeNodeMap: ParserServices['tsNodeToESTreeNodeMap'];
}
declare function parseWithNodeMaps(
code: string,
options: ParseOptions = PARSE_DEFAULT_OPTIONS,
): ParseWithNodeMapsResult;

Example usage:
import { parseWithNodeMaps } from '@typescript-eslint/typescript-estree';
const code = `const hello: string = 'world';`;
const { ast, esTreeNodeToTSNodeMap, tsNodeToESTreeNodeMap } = parseWithNodeMaps(
code,
{
loc: true,
range: true,
},
);

TSESTree, AST_NODE_TYPES and AST_TOKEN_TYPES

Types for the AST produced by the parse functions.

- TSESTree is a namespace which contains object types representing all of the AST Nodes produced by the parser.
- AST_NODE_TYPES is an enum which provides the values for every single AST node’s type property.
- AST_TOKEN_TYPES is an enum which provides the values for every single AST token’s type property.

If you use a non-supported version of TypeScript, the parser will log a warning to the console.
Please ensure that you are using a supported version before submitting any issues/bug reports.
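The parse functions return ESTree-shaped nodes that can be traversed generically by dispatching on each node's type property. A minimal walker sketch in plain JavaScript; the hand-written node objects below are stand-ins for illustration, not actual output of this package:

```javascript
// Hand-written nodes shaped roughly like the parser's ESTree output.
const ast = {
  type: 'Program',
  body: [
    {
      type: 'VariableDeclaration',
      declarations: [
        {
          type: 'VariableDeclarator',
          id: { type: 'Identifier', name: 'hello' },
          init: { type: 'Literal', value: 'world' },
        },
      ],
    },
  ],
};

// Generic walk: visit every object carrying a `type` string, depth first.
function walk(node, visit) {
  if (!node || typeof node !== 'object') return;
  if (Array.isArray(node)) return node.forEach((child) => walk(child, visit));
  if (typeof node.type === 'string') visit(node);
  for (const key of Object.keys(node)) {
    if (key !== 'type') walk(node[key], visit);
  }
}

const seen = [];
walk(ast, (n) => seen.push(n.type));
console.log(seen);
// ['Program', 'VariableDeclaration', 'VariableDeclarator', 'Identifier', 'Literal']
```

The same dispatch-on-type pattern works against real parser output, typically combined with the AST_NODE_TYPES enum rather than string literals.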
Please check the current list of open and known issues and ensure the issue has not been reported before. When creating a new issue provide as much information about your environment as possible. This includes:
- typescript-estree version

A couple of years after work on this parser began, the TypeScript Team at Microsoft began officially supporting TypeScript parsing via Babel.
I work closely with the TypeScript Team and we are gradually aligning the AST of this project with the one produced by Babel’s parser. To that end, I have created a full test harness to compare the ASTs of the two projects which runs on every PR, please see the code for more details.
If you encounter a bug with the parser that you want to investigate, you can turn on debug logging by setting the environment variable DEBUG=typescript-eslint:*. For example, in this repo you can run: DEBUG=typescript-eslint:* yarn lint.
See the contributing guide here
creates an empty graph
adds a vertex to the graph.
| params | |
|---|---|
| name | type |
| key | number or string |
| value | object |
| return |
|---|
| Vertex |
| runtime |
|---|
| O(1) |
directedGraph.addVertex('v1', 1);
directedGraph.addVertex('v2', 2);
directedGraph.addVertex('v3', 3);
directedGraph.addVertex('v4', 4);
directedGraph.addVertex('v5', 5);
graph.addVertex('v1', true);
graph.addVertex('v2', true);
graph.addVertex('v3', true);
graph.addVertex('v4', true);
graph.addVertex('v5', true);

| params | |
|---|---|
| name | type |
| key | number or string |
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
gets the number of vertices in the graph.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
adds an edge with a weight between two existing vertices. The default weight is 1 if not defined. The edge is a direction from source to destination when added in a directed graph, and a connecting two-way edge when added in a graph.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the source vertex key |
| destKey | number or string | the destination vertex key |
| weight | number | the weight of the edge |
| runtime |
|---|
| O(1) |
directedGraph.addEdge('v1', 'v2', 2);
directedGraph.addEdge('v1', 'v3', 3);
directedGraph.addEdge('v1', 'v4', 1);
directedGraph.addEdge('v2', 'v4', 1);
directedGraph.addEdge('v3', 'v5', 2);
directedGraph.addEdge('v4', 'v3', 1);
directedGraph.addEdge('v4', 'v5', 4);
graph.addEdge('v1', 'v2', 2);
graph.addEdge('v2', 'v3', 3);
graph.addEdge('v1', 'v3', 6);
graph.addEdge('v2', 'v4', 1);
graph.addEdge('v4', 'v3', 1);
graph.addEdge('v4', 'v5', 4);
graph.addEdge('v3', 'v5', 2);

checks if the graph has an edge between two existing vertices. In a directed graph, it returns true only if there is a direction from source to destination.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the source vertex key |
| destKey | number or string | the destination vertex key |
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
console.log(directedGraph.hasEdge('v1', 'v2')); // true
console.log(directedGraph.hasEdge('v2', 'v1')); // false
console.log(graph.hasEdge('v1', 'v2')); // true
console.log(graph.hasEdge('v2', 'v1')); // true

gets the number of edges in the graph.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
gets the edge’s weight between two vertices in the graph. If there is no direct edge between the two vertices, it returns null. It also returns 0 if the source key is equal to the destination key.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the source vertex key |
| destKey | number or string | the destination vertex key |
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
console.log(directedGraph.getWeight('v1', 'v2')); // 2
console.log(directedGraph.getWeight('v2', 'v1')); // null
console.log(directedGraph.getWeight('v1', 'v1')); // 0
console.log(graph.getWeight('v1', 'v2')); // 2
console.log(graph.getWeight('v2', 'v1')); // 2
console.log(graph.getWeight('v1', 'v1')); // 0
console.log(graph.getWeight('v1', 'v4')); // null

removes a vertex with all its edges from the graph by its key.
| params | ||
|---|---|---|
| name | type | description |
| key | number or string | the vertex key |
| return |
|---|
| boolean |
| runtime | |
|---|---|
| Graph | O(K) : K = number of connected edges to the vertex |
| Directed Graph | O(E) : E = number of edges in the graph |
directedGraph.removeVertex('v5');
console.log(directedGraph.verticesCount()); // 4
console.log(directedGraph.edgesCount()); // 5
graph.removeVertex('v5');
console.log(graph.verticesCount()); // 4
console.log(graph.edgesCount()); // 5

removes an edge between two existing vertices.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the source vertex key |
| destKey | number or string | the destination vertex key |
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
directedGraph.removeEdge('v1', 'v3'); // true
console.log(directedGraph.edgesCount()); // 4
graph.removeEdge('v2', 'v3'); // true
console.log(graph.edgesCount()); // 4

removes all connected edges to a vertex by its key.
| params | ||
|---|---|---|
| name | type | description |
| key | number or string | the vertex key |
| return | description |
|---|---|
| number | number of removed edges |
| runtime | |
|---|---|
| Graph | O(K) : K = number of connected edges to the vertex |
| Directed Graph | O(E) : E = number of edges in the graph |
const dg = new DirectedGraph();
dg.addVertex('v1');
dg.addVertex('v2');
dg.addVertex('v3');
dg.addEdge('v1', 'v2');
dg.addEdge('v2', 'v1'); // this is counted as a direction in directed graph.
dg.addEdge('v1', 'v3');
dg.removeEdges('v1'); // 3
const g = new Graph();
g.addVertex('v1');
g.addVertex('v2');
g.addVertex('v3');
g.addEdge('v1', 'v2');
g.addEdge('v1', 'v3');
g.removeEdges('v1'); // 2

traverses the graph using the depth-first recursive search.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the starting vertex key |
| cb | function | the callback that is called with each vertex |
| runtime |
|---|
| O(V) : V = the number of vertices in the graph |
directedGraph.traverseDfs('v1', (v) => console.log(`${v.getKey()}:${v.getValue()}`));
/*
v1:1
v2:2
v4:4
v3:3
*/
graph.traverseDfs('v1', (v) => console.log(v.serialize()));
/*
{ key: 'v1', value: true }
{ key: 'v2', value: true }
{ key: 'v4', value: true }
{ key: 'v3', value: true }
*/

traverses the graph using the breadth-first search with a queue.
| params | ||
|---|---|---|
| name | type | description |
| srcKey | number or string | the starting vertex key |
| cb | function | the callback that is called with each vertex |
| runtime |
|---|
| O(V) : V = the number of vertices in the graph |
directedGraph.traverseBfs('v1', (v) => console.log(`${v.getKey()}:${v.getValue()}`));
/*
v1:1
v2:2
v4:4
v3:3
*/
graph.traverseBfs('v1', (v) => console.log(v.serialize()));
/*
{ key: 'v1', value: true }
{ key: 'v2', value: true }
{ key: 'v3', value: true }
{ key: 'v4', value: true }
*/

clears all vertices and edges in the graph.
| runtime |
|---|
| O(1) |
directedGraph.clear();
console.log(directedGraph.verticesCount()); // 0
console.log(directedGraph.edgesCount()); // 0
graph.clear();
console.log(graph.verticesCount()); // 0
console.log(graph.edgesCount()); // 0

returns the vertex key.
| return |
|---|
| string or number |
returns the vertex associated value.
| return |
|---|
| object |
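As a rough mental model of the API documented above, the vertex and edge operations can be approximated with plain adjacency maps. This is an illustrative sketch, not the library's implementation:

```javascript
// A tiny directed graph backed by Maps; method names mirror the docs above.
class TinyGraph {
  constructor() {
    this.vertices = new Map(); // key -> value
    this.edges = new Map();    // srcKey -> Map(destKey -> weight)
  }
  addVertex(key, value) { // O(1)
    this.vertices.set(key, value);
    if (!this.edges.has(key)) this.edges.set(key, new Map());
  }
  addEdge(srcKey, destKey, weight = 1) { // directed edge, O(1)
    this.edges.get(srcKey).set(destKey, weight);
  }
  hasEdge(srcKey, destKey) { // O(1)
    return this.edges.has(srcKey) && this.edges.get(srcKey).has(destKey);
  }
  getWeight(srcKey, destKey) {
    if (srcKey === destKey) return 0;
    return this.hasEdge(srcKey, destKey) ? this.edges.get(srcKey).get(destKey) : null;
  }
}

const g = new TinyGraph();
g.addVertex('v1', 1);
g.addVertex('v2', 2);
g.addEdge('v1', 'v2', 2);
console.log(g.hasEdge('v1', 'v2')); // true
console.log(g.getWeight('v2', 'v1')); // null
```

Because both lookups are single Map operations, the O(1) runtimes listed in the tables above follow directly from this representation.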
grunt build
Extended glob support for JavaScript. Adds (almost) the expressive power of regular expressions to glob patterns.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
Heads up!: This library only supports extglobs, to handle full glob patterns and other extended globbing features use micromatch instead.
The main export is a function that takes a string and options, and returns an object with the parsed AST and the compiled .output, which is a regex-compatible string that can be used for matching.
| pattern | regex equivalent | description |
|---|---|---|
| `?(pattern-list)` | `(...\|...)?` | Matches zero or one occurrence of the given pattern(s) |
| `*(pattern-list)` | `(...\|...)*` | Matches zero or more occurrences of the given pattern(s) |
| `+(pattern-list)` | `(...\|...)+` | Matches one or more occurrences of the given pattern(s) |
| `@(pattern-list)` | `(...\|...)` ¹ | Matches one of the given pattern(s) |
| `!(pattern-list)` | N/A | Matches anything except one of the given pattern(s) |
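As a rough illustration of the quantifier forms in the table above, here is a hand-rolled translation into RegExp syntax. This is not extglob's compiler; it handles only the @( ), ?( ), *( ) and +( ) forms with plain, non-nested alternatives, and does not support !( ) at all:

```javascript
// Translate simple extglob quantifier forms into an anchored RegExp.
// @(a|b) -> (?:a|b), ?(a|b) -> (?:a|b)?, *(x) -> (?:x)*, +(x) -> (?:x)+
function toRegExp(pattern) {
  const translated = pattern.replace(
    /([?*+@])\(([^)]*)\)/g,
    (_, op, list) => `(?:${list})` + (op === '@' ? '' : op)
  );
  return new RegExp(`^${translated}$`);
}

console.log(toRegExp('@(a|b)').test('a'));   // true
console.log(toRegExp('?(a|b)').test(''));    // true
console.log(toRegExp('+(ab)').test('abab')); // true
console.log(toRegExp('@(a|b)').test('c'));   // false
```

The real library additionally handles nesting, negation, and glob semantics for slashes and dots, which is why `extglob('*.!(*a)')` produces a much more involved output string.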
Convert the given extglob pattern into a regex-compatible string. Returns an object with the compiled result and the parsed AST.
Params
- pattern {String}
- options {Object}
- returns {String}

Example
var extglob = require('extglob');
console.log(extglob('*.!(*a)'));
//=> '(?!\\.)[^/]*?\\.(?!(?!\\.)[^/]*?a\\b).*?'

Takes an array of strings and an extglob pattern and returns a new array that contains only the strings that match the pattern.
Params
- list {Array}: Array of strings to match
- pattern {String}: Extglob pattern
- options {Object}
- returns {Array}: Returns an array of matches

Example
var extglob = require('extglob');
console.log(extglob.match(['a.a', 'a.b', 'a.c'], '*.!(*a)'));
//=> ['a.b', 'a.c']

Returns true if the specified string matches the given extglob pattern.
Params
- string {String}: String to match
- pattern {String}: Extglob pattern
- options {Object}
- returns {Boolean}

Example
var extglob = require('extglob');
console.log(extglob.isMatch('a.a', '*.!(*a)'));
//=> false
console.log(extglob.isMatch('a.b', '*.!(*a)'));
//=> true

Returns true if the given string contains the given pattern. Similar to .isMatch but the pattern can match any part of the string.
Params
- str {String}: The string to match.
- pattern {String}: Glob pattern to use for matching.
- options {Object}
- returns {Boolean}: Returns true if the pattern matches any part of str.

Example
var extglob = require('extglob');
console.log(extglob.contains('aa/bb/cc', '*b'));
//=> true
console.log(extglob.contains('aa/bb/cc', '*d'));
//=> false

Takes an extglob pattern and returns a matcher function. The returned function takes the string to match as its only argument.
Params
- pattern {String}: Extglob pattern
- options {Object}
- returns {Function}: Returns a matcher function

Example
var extglob = require('extglob');
var isMatch = extglob.matcher('*.!(*a)');
console.log(isMatch('a.a'));
//=> false
console.log(isMatch('a.b'));
//=> true

Convert the given extglob pattern into a regex-compatible string. Returns an object with the compiled result and the parsed AST.
Params
- str {String}
- options {Object}
- returns {String}

Example
var extglob = require('extglob');
console.log(extglob.create('*.!(*a)').output);
//=> '(?!\\.)[^/]*?\\.(?!(?!\\.)[^/]*?a\\b).*?'

Returns an array of matches captured by pattern in string, or null if the pattern did not match.
Params
- pattern {String}: Glob pattern to use for matching.
- string {String}: String to match
- options {Object}: See available options for changing how matches are performed
- returns {Array|null}: Returns an array of captures if the string matches the glob pattern, otherwise null.

Example
var extglob = require('extglob');
extglob.capture(pattern, string[, options]);
console.log(extglob.capture('test/*.js', 'test/foo.js'));
//=> ['foo']
console.log(extglob.capture('test/*.js', 'foo/bar.css'));
//=> null

Create a regular expression from the given pattern and options.
Params
- pattern {String}: The pattern to convert to regex.
- options {Object}
- returns {RegExp}

Example
var extglob = require('extglob');
var re = extglob.makeRe('*.!(*a)');
console.log(re);
//=> /^[^\/]*?\.(?![^\/]*?a)[^\/]*?$/

Available options are based on the options from Bash (and the option names used in Bash).
Type: boolean
Default: undefined
When enabled, the pattern itself will be returned when no matches are found.
Alias for options.nullglob, included for parity with minimatch.
Type: boolean
Default: undefined
Functions are memoized based on the given glob patterns and options. Disable memoization by setting options.cache to false.
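A sketch of what memoization keyed by pattern and options can look like; the `memoizedMakeRe` helper and its `compile` parameter are hypothetical names used only for illustration, not part of extglob's API:

```javascript
// Cache compiled results per pattern + serialized options,
// with an opt-out via options.cache === false.
const cache = new Map();

function memoizedMakeRe(pattern, options = {}, compile) {
  if (options.cache === false) return compile(pattern, options);
  const key = pattern + '\u0000' + JSON.stringify(options);
  if (!cache.has(key)) cache.set(key, compile(pattern, options));
  return cache.get(key);
}

let calls = 0;
const compile = (p) => { calls++; return new RegExp(p); };

memoizedMakeRe('a.*', {}, compile);
memoizedMakeRe('a.*', {}, compile); // served from cache
console.log(calls); // 1
memoizedMakeRe('a.*', { cache: false }, compile); // bypasses the cache
console.log(calls); // 2
```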
Type: boolean
Default: undefined
Throw an error if no matches are found.
Last run on December 21, 2017
# negation-nested (49 bytes)
extglob x 2,228,255 ops/sec ±0.98% (89 runs sampled)
minimatch x 207,875 ops/sec ±0.61% (91 runs sampled)
fastest is extglob (by 1072% avg)
# negation-simple (43 bytes)
extglob x 2,205,668 ops/sec ±1.00% (91 runs sampled)
minimatch x 311,923 ops/sec ±1.25% (91 runs sampled)
fastest is extglob (by 707% avg)
# range-false (57 bytes)
extglob x 2,263,877 ops/sec ±0.40% (94 runs sampled)
minimatch x 271,372 ops/sec ±1.02% (91 runs sampled)
fastest is extglob (by 834% avg)
# range-true (56 bytes)
extglob x 2,161,891 ops/sec ±0.41% (92 runs sampled)
minimatch x 268,265 ops/sec ±1.17% (91 runs sampled)
fastest is extglob (by 806% avg)
# star-simple (46 bytes)
extglob x 2,211,081 ops/sec ±0.49% (92 runs sampled)
minimatch x 343,319 ops/sec ±0.59% (91 runs sampled)
fastest is extglob (by 644% avg)

This library has complete parity with Bash 4.3 with only a couple of minor differences.
options.contains to true.

Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
| Commits | Contributor |
|---|---|
| 49 | jonschlinkert |
| 2 | isiahmeadows |
| 1 | doowb |
| 1 | devongovett |
| 1 | mjbvz |
| 1 | shinnn |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on December 21, 2017.
¹ @ isn’t a RegEx character. ↩
A suite of pre-built Dojo widgets, ready to use in your web application. These widgets are built using Dojo’s widget authoring system [(@dojo/framework/core)](https://github.com/dojo/framework/blob/master/src/core/README.md).
To use @dojo/widgets in your project, you will need to install the package:
This package contains all of the widgets in this repo.
All of the widgets are on the same release schedule, that is to say, that we release all widgets at the same time. Minor releases may include new widgets and/or features, whereas patch releases may contain fixes to more than 1 widget.
To use a widget in your application, you will need to import each widget individually:
Each widget module has a default export of the widget itself, as well as named exports for things such as properties specific to the widget:
Because each widget is a separate module, when you create a release build of your application, you will only include the widgets that you have explicitly imported. This allows our dojo cli build tooling to make sure that the production build of your application only includes the widgets you use and is as small as possible.
All widgets are supported in all evergreen browsers (Chrome, Edge, Firefox, IE11+, and Safari) as well as popular mobile browsers (Mobile Safari, Chrome on Android).
All widgets are designed to be accessible. If custom ARIA semantics are required, widgets have an aria property that may be passed an object with custom aria-* attributes.
All widgets are fully themeable. Example themes are available in the [@dojo/themes](https://github.com/dojo/themes) repository.
All widgets support internationalization (i18n)
Live examples of current widgets are available in the widget showcase.
You can register event handlers that get called when the corresponding events occur by passing the handlers into a widget’s properties. The naming convention for event handlers is as follows:
onRequest[X], e.g. for a close event, the event handler called by the child widget must be called onRequestClose
Here the child widget is requesting that the close event take place.
Where the child widget is not requesting an action, the Request naming convention is dropped: on[X], e.g. for a dismiss event, the event handler called by the child widget must be called onDismiss
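The two conventions can be summarized with a tiny helper; `requestHandlerName` and `notifyHandlerName` are hypothetical names used only to illustrate the naming rule:

```javascript
const capitalize = (s) => s[0].toUpperCase() + s.slice(1);

// Child requests that the parent perform the action:
const requestHandlerName = (event) => `onRequest${capitalize(event)}`;
// Child merely notifies that the event occurred:
const notifyHandlerName = (event) => `on${capitalize(event)}`;

console.log(requestHandlerName('close'));  // 'onRequestClose'
console.log(notifyHandlerName('dismiss')); // 'onDismiss'
```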
We use font awesome for icons. Where a theme requires specific icons that are not part of the Font Awesome set, then those themes will ship their own icons.
Icon fonts are generated using IcoMoon. If a new icon is required, it is possible to upload the current dojoSelect.json from src/theme/fonts and then add new icons by selecting from the Font Awesome library. After selecting the new icons from the library, merge them down into the current icon set, then delete the rest of the Font Awesome icons that were added by IcoMoon. After this you can export and download them as a zip. Once downloaded you will also need to unzip them and replace the font files (svg, woff, ttf) in src/theme/fonts. Now download the new selection JSON file from the projects page of IcoMoon and replace the current dojoSelection.json file.
To make use of the new icons it is necessary to update the icon.m.css file in the theme folder with the new unicode icon like so:
.newIcon:before {
content: "\f123";
}
Where \f123 is the unicode character for the new icon. To check the new icon works you can render it in the src/widgets/examples/icon/Basic.tsx to make sure everything renders correctly.
There is an icon widget that aids in creating proper semantics and provides type-checking for the type of icon.
px vs. em - we specify font sizes in px. When creating a widget, spacing (margin, padding) should be specified using px unless the design calls for proportional spacing, in which case em can be used.
Widgets adhere to a basic convention for using specific ranges of z-index values based on function and visual context. This convention is followed in both individual widget CSS and in the Dojo theme styles. These values can be overridden in a custom theme if necessary since no z-index values are set in fixed styles.
The range definitions are as follows:
There are many ways in which you can customize the behavior and appearance of Dojo widgets. See the core README for examples of how to customize the theme or a specific CSS class of a widget.
Or you can write your own widget that extends an official widget.
Because all Dojo widgets are Classes, you can simply extend the Class to add or change its behavior.
Dojo widgets provide standard extension points to allow you to customize their behavior. For more details, please refer to the widget authoring system.
Individual widgets also provide certain types of extension points where applicable: - render*: Large render functions are split up into multiple smaller pieces that can be more easily overridden to create custom vdom. - getModifierClasses: Modify the array of conditionally applied classes like css.selected or css.disabled. Not all widgets include these extension points, and some have additional overridable methods.
When writing a widget variant, e.g. RaisedButton, you should ensure that you use theme.compose from the widget theme middleware. This allows your variant to inherit css from its base widget while allowing it to be themed separately.
We appreciate your interest! Please see the Dojo Meta Repository for the Contributing Guidelines and Style Guide.
Note that all changes to widgets should work with the dojo theme. To test this start the example page (instructions at Installation section) and select the dojo option at the top of the page.
To start working with this package, clone the repository and run npm install.
In order to build the project run npm run build.
Test cases MUST be written using Intern using the Object test interface and Assert assertion interface.
90% branch coverage MUST be provided for all code submitted to this repository, as reported by istanbul’s combined coverage results for all supported platforms.
To test locally in node run:
npm run test
The Dojo widget examples application is located in src/examples.
To add a new example, create a directory that matches the directory name of the widget e.g. src/examples/src/widgets/text-input. Each widget must have an example called Basic.tsx and an entry in the src/examples/src/config.ts keyed by the name of the widget directory. The widget example should import widgets from @dojo/widgets and not via a relative import. It is very important that the config entry name (ie. text-input) matches the folder name / css file name of the widget otherwise the doc build will fail.
{
'text-input': {
filename: 'index',
overview: {
example: {
module: BasicCheckbox,
filename: 'Basic'
}
},
examples: [
{
title: 'The example title',
description: 'Optional example description',
module: OtherCheckbox,
filename: 'Other'
}
]
}
}

To view the examples locally run npm run dev in the root directory and navigate to http://localhost:9999. This starts the examples in watch mode and should update when widget modules are changed. Note that you do not have to install dependencies in the src/examples project; doing so will result in an error.
The widget examples and documentation are automatically generated by the examples application when built with the docs feature flag set to true. The site relies on a few conventions in order to be able to do this:
- Widget properties interfaces must be suffixed with Properties, e.g. for text-input the properties interface would be TextInputProperties
- Widget theme css must live in src/theme and match the name of the widget directory e.g. text-input.m.css
- Widgets must have a README.md file in their root directory.

interface ExampleProperties {
/** This is the description for foo */
foo: string;
/** This is the description for bar */
bar: string;
}

To build the documentation run npm run build:docs, and to build and serve the documentation in watch mode run npm run build:docs:dev.
The examples also run on CodeSandbox. To run the examples on the master branch go to https://codesandbox.io/s/github/dojo/widgets/tree/master/src/examples. To run the examples for a specific user/branch/tag adjust the url as required.
Port of TweetNaCl / NaCl to JavaScript for modern browsers and Node.js. Public domain.
Demo: https://tweetnacl.js.org
:warning: The library is stable and API is frozen, however it has not been independently reviewed. If you can help reviewing it, please contact me.
The primary goal of this project is to produce a translation of TweetNaCl to JavaScript which is as close as possible to the original C implementation, plus a thin layer of idiomatic high-level API on top of it.
There are two versions, you can use either of them:
nacl.js is the port of TweetNaCl with minimum differences from the original + high-level API.
nacl-fast.js is like nacl.js, but with some functions replaced with faster versions.
You can install TweetNaCl.js via a package manager:
$ bower install tweetnacl
NPM:
npm install tweetnacl
All API functions accept and return bytes as Uint8Arrays. If you need to encode or decode strings, use functions from https://github.com/dchest/tweetnacl-util-js or one of the more robust codec packages.
In Node.js v4 and later Buffer objects are backed by Uint8Arrays, so you can freely pass them to TweetNaCl.js functions as arguments. The returned objects are still Uint8Arrays, so if you need Buffers, you’ll have to convert them manually; make sure to convert using copying: new Buffer(array), instead of sharing: new Buffer(array.buffer), because some functions return subarrays of their buffers.
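The copy-versus-share distinction can be seen with a plain Uint8Array subarray, here using the modern Buffer.from in place of the deprecated new Buffer constructor:

```javascript
// Some functions return subarray views of a larger buffer, so sharing
// the backing ArrayBuffer picks up more bytes than you intend.
const big = new Uint8Array([1, 2, 3, 4, 5, 6]);
const view = big.subarray(2, 5); // shares big's memory: bytes [3, 4, 5]

const shared = Buffer.from(view.buffer); // whole backing buffer, shared
const copied = Buffer.from(view);        // just the 3 bytes, copied

console.log(shared.length); // 6, the entire underlying ArrayBuffer
console.log(copied.length); // 3

big[2] = 99;
console.log(view[0]);   // 99, the view sees the change
console.log(copied[0]); // 3, the copy does not
```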
Implements curve25519-xsalsa20-poly1305.
Generates a new random key pair for box and returns it as an object with publicKey and secretKey members:
{
publicKey: ..., // Uint8Array with 32-byte public key
secretKey: ... // Uint8Array with 32-byte secret key
}
Returns a key pair for box with public key corresponding to the given secret key.
Encrypts and authenticates a message using the peer’s public key, our secret key, and the given nonce, which must be unique for each distinct message for a key pair.
Returns an encrypted and authenticated message, which is nacl.box.overheadLength longer than the original message.
Authenticates and decrypts the given box with peer’s public key, our secret key, and the given nonce.
Returns the original message, or false if authentication fails.
Returns a precomputed shared key which can be used in nacl.box.after and nacl.box.open.after.
Same as nacl.box, but uses a shared key precomputed with nacl.box.before.
Same as nacl.box.open, but uses a shared key precomputed with nacl.box.before.
Length of public key in bytes.
Length of secret key in bytes.
Length of precomputed shared key in bytes.
Length of nonce in bytes.
Length of overhead added to box compared to original message.
Implements xsalsa20-poly1305.
Encrypts and authenticates a message using the key and the nonce. The nonce must be unique for each distinct message for this key.
Returns an encrypted and authenticated message, which is nacl.secretbox.overheadLength longer than the original message.
Authenticates and decrypts the given secret box using the key and the nonce.
Returns the original message, or false if authentication fails.
Length of key in bytes.
Length of nonce in bytes.
Length of overhead added to secret box compared to original message.
Implements curve25519.
Multiplies an integer n by a group element p and returns the resulting group element.
Multiplies an integer n by a standard group element and returns the resulting group element.
Length of scalar in bytes.
Length of group element in bytes.
Implements ed25519.
Generates a new random key pair for signing and returns it as an object with publicKey and secretKey members:
{
publicKey: ..., // Uint8Array with 32-byte public key
secretKey: ... // Uint8Array with 64-byte secret key
}
Returns a signing key pair with public key corresponding to the given 64-byte secret key. The secret key must have been generated by nacl.sign.keyPair or nacl.sign.keyPair.fromSeed.
Returns a new signing key pair generated deterministically from a 32-byte seed. The seed must contain enough entropy to be secure. This method is not recommended for general use: instead, use nacl.sign.keyPair to generate a new key pair from a random seed.
Signs the message using the secret key and returns a signed message.
Verifies the signed message and returns the message without signature.
Returns null if verification failed.
Signs the message using the secret key and returns a signature.
Verifies the signature for the message and returns true if verification succeeded or false if it failed.
Length of signing public key in bytes.
Length of signing secret key in bytes.
Length of seed for nacl.sign.keyPair.fromSeed in bytes.
Length of signature in bytes.
Implements SHA-512.
Returns SHA-512 hash of the message.
Length of hash in bytes.
Returns a Uint8Array of the given length containing random bytes of cryptographic quality.
Implementation note
TweetNaCl.js uses the following methods to generate random bytes, depending on the platform it runs on:
- window.crypto.getRandomValues (WebCrypto standard)
- window.msCrypto.getRandomValues (Internet Explorer 11)
- crypto.randomBytes (Node.js)

If the platform doesn’t provide a suitable PRNG, the following functions, which require random numbers, will throw an exception:
- nacl.randomBytes
- nacl.box.keyPair
- nacl.sign.keyPair

Other functions are deterministic and will continue working.
If a platform you are targeting doesn’t implement a secure random number generator, but you somehow have a cryptographically-strong source of entropy (not Math.random!), and you know what you are doing, you can plug it into TweetNaCl.js like this:
nacl.setPRNG(function(x, n) {
// ... copy n random bytes into x ...
});
Note that nacl.setPRNG completely replaces the internal random byte generator with the one provided.
Compares x and y in constant time and returns true if their lengths are non-zero and equal, and their contents are equal.
Returns false if either of the arguments has zero length, or arguments have different lengths, or their contents differ.
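The comparison semantics described above can be sketched in plain JavaScript; this is an illustrative stand-in for nacl.verify, not the library’s actual code:

```javascript
// Constant-time-style comparison sketch: XOR-accumulate differences so the
// loop always visits every byte, regardless of where the first mismatch is.
function verify(x, y) {
  if (x.length === 0 || y.length === 0) return false; // zero length => false
  if (x.length !== y.length) return false;            // different lengths => false
  let diff = 0;
  for (let i = 0; i < x.length; i++) diff |= x[i] ^ y[i];
  return diff === 0;
}

console.log(verify(new Uint8Array([1, 2]), new Uint8Array([1, 2]))); // true
console.log(verify(new Uint8Array([1, 2]), new Uint8Array([1, 3]))); // false
console.log(verify(new Uint8Array(0), new Uint8Array(0)));           // false
```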
TweetNaCl.js supports modern browsers that have a cryptographically secure pseudorandom number generator and typed arrays, including the latest versions of:
Other systems:
Install NPM modules needed for development:
npm install
To build minified versions:
npm run build
Tests use minified version, so make sure to rebuild it every time you change nacl.js or nacl-fast.js.
To run tests in Node.js:
npm run test-node
By default all tests described here work on nacl.min.js. To test other versions, set environment variable NACL_SRC to the file name you want to test. For example, the following command will test fast minified version:
$ NACL_SRC=nacl-fast.min.js npm run test-node
To run full suite of tests in Node.js, including comparing outputs of JavaScript port to outputs of the original C version:
npm run test-node-all
To prepare tests for browsers:
npm run build-test-browser
and then open test/browser/test.html (or test/browser/test-fast.html) to run them.
To run headless browser tests with tape-run (powered by Electron):
npm run test-browser
(If you get Error: spawn ENOENT, install xvfb: sudo apt-get install xvfb.)
To run tests in both Node and Electron:
npm test
To run benchmarks in Node.js:
npm run bench
$ NACL_SRC=nacl-fast.min.js npm run bench
To run benchmarks in a browser, open test/benchmark/bench.html (or test/benchmark/bench-fast.html).
For reference, here are benchmarks from MacBook Pro (Retina, 13-inch, Mid 2014) laptop with 2.6 GHz Intel Core i5 CPU (Intel) in Chrome 53/OS X and Xiaomi Redmi Note 3 smartphone with 1.8 GHz Qualcomm Snapdragon 650 64-bit CPU (ARM) in Chrome 52/Android:
| nacl.js Intel | nacl-fast.js Intel | nacl.js ARM | nacl-fast.js ARM | |
|---|---|---|---|---|
| salsa20 | 1.3 MB/s | 128 MB/s | 0.4 MB/s | 43 MB/s |
| poly1305 | 13 MB/s | 171 MB/s | 4 MB/s | 52 MB/s |
| hash | 4 MB/s | 34 MB/s | 0.9 MB/s | 12 MB/s |
| secretbox 1K | 1113 op/s | 57583 op/s | 334 op/s | 14227 op/s |
| box 1K | 145 op/s | 718 op/s | 37 op/s | 368 op/s |
| scalarMult | 171 op/s | 733 op/s | 56 op/s | 380 op/s |
| sign | 77 op/s | 200 op/s | 20 op/s | 61 op/s |
| sign.open | 39 op/s | 102 op/s | 11 op/s | 31 op/s |
(You can run benchmarks on your devices by clicking on the links at the bottom of the home page).
In short, with nacl-fast.js and 1024-byte messages you can expect to encrypt and authenticate more than 57000 messages per second on a typical laptop or more than 14000 messages per second on a $170 smartphone, sign about 200 and verify 100 messages per second on a laptop or 60 and 30 messages per second on a smartphone, per CPU core (with Web Workers you can do these operations in parallel), which is good enough for most applications.
See AUTHORS.md file.
Some notable users of TweetNaCl.js:
A JavaScript implementation of LinkedList & DoublyLinkedList.

- Linked List
- Doubly Linked List
inserts a node at the beginning of the list.
| params | |
|---|---|
| name | type |
| value | object |
| return | description | |
|---|---|---|
| LinkedList | LinkedListNode | the inserted node |
| DoublyLinkedList | DoublyLinkedListNode | |
| runtime |
|---|
| O(1) |
linkedList.insertFirst(1);
const head1 = linkedList.insertFirst(2);
console.log(head1.getValue()); // 2
doublyLinkedList.insertFirst(1);
const head2 = doublyLinkedList.insertFirst(2);
console.log(head2.getValue()); // 2

inserts a node at the end of the list.
| params | |
|---|---|
| name | type |
| value | object |
| return | description | |
|---|---|---|
| LinkedList | LinkedListNode | the inserted node |
| DoublyLinkedList | DoublyLinkedListNode | |
| runtime | |
|---|---|
| LinkedList | O(n) |
| DoublyLinkedList | O(1) |
linkedList.insertLast(3);
const last1 = linkedList.insertLast(4);
console.log(last1.getValue()); // 4
console.log(last1.getNext()); // null
doublyLinkedList.insertLast(3);
const last2 = doublyLinkedList.insertLast(4);
console.log(last2.getValue()); // 4
console.log(last2.getPrev().getValue()); // 3

inserts a node at a specific position in the list. The first (head) node is at position 0.
| params | |
|---|---|
| name | type |
| value | object |
| position | number |
| return | description | |
|---|---|---|
| LinkedList | LinkedListNode | the inserted node |
| DoublyLinkedList | DoublyLinkedListNode | |
| runtime |
|---|
| O(n) |
const node1 = linkedList.insertAt(5, 2); // node1.getValue() = 5
const node2 = doublyLinkedList.insertAt(5, 2); // node2.getValue() = 5

Loops over the linked list from beginning to end and passes each node to the callback.
| params | |
|---|---|
| name | type |
| cb | function |
| runtime |
|---|
| O(n) |
linkedList.forEach((node) => console.log(node.getValue()));
/*
2
1
5
3
4
*/
doublyLinkedList.forEach((node) => console.log(node.getValue()));
/*
2
1
5
3
4
*/

Only in DoublyLinkedList. Loops over the doubly linked list from end to beginning and passes each node to the callback.
| params | |
|---|---|
| name | type |
| cb | function |
| runtime |
|---|
| O(n) |
returns the first node for which the callback returns true, or null if nothing is found.
| params | |
|---|---|
| name | type |
| cb | function |
| return | description | |
|---|---|---|
| LinkedList | LinkedListNode | the first found node |
| DoublyLinkedList | DoublyLinkedListNode | |
| runtime |
|---|
| O(n) |
const node1 = linkedList.find((node) => node.getValue() === 5);
console.log(node1.getValue()); // 5
console.log(node1.getNext().getValue()); // 3
const node2 = doublyLinkedList.find((node) => node.getValue() === 5);
console.log(node2.getValue()); // 5
console.log(node2.getNext().getValue()); // 3
console.log(node2.getPrev().getValue()); // 1

returns a filtered list of all the nodes for which the callback returns true.
| params | |
|---|---|
| name | type |
| cb | function |
| return | |
|---|---|
| LinkedList | LinkedListNode |
| DoublyLinkedList | DoublyLinkedListNode |
| runtime |
|---|
| O(n) |
const filterLinkedList = linkedList.filter((node) => node.getValue() > 2);
filterLinkedList.forEach((node) => console.log(node.getValue()));
/*
5
3
4
*/
const filteredDoublyLinkedList = doublyLinkedList.filter((node) => node.getValue() > 2);
filteredDoublyLinkedList.forEach((node) => console.log(node.getValue()));
/*
5
3
4
*/

converts the linked list into an array.
| return |
|---|
| array |
| runtime |
|---|
| O(n) |
console.log(linkedList.toArray()); // [2, 1, 5, 3, 4]
console.log(doublyLinkedList.toArray()); // [2, 1, 5, 3, 4]

checks if the linked list is empty.
| return |
|---|
| boolean |
| runtime |
|---|
| O(1) |
returns the head node in the linked list.
| return | |
|---|---|
| LinkedList | LinkedListNode |
| DoublyLinkedList | DoublyLinkedListNode |
| runtime |
|---|
| O(1) |
console.log(linkedList.head().getValue()); // 2
console.log(doublyLinkedList.head().getValue()); // 2

returns the tail node of the doubly linked list.
| return |
|---|
| DoublyLinkedListNode |
| runtime |
|---|
| O(1) |
returns the number of nodes in the linked list.
| return |
|---|
| number |
| runtime |
|---|
| O(1) |
removes the first (head) node of the list.
| return | description |
|---|---|
| boolean | true if a node has been removed |
| runtime |
|---|
| O(1) |
removes the last node from the list.
| return | description |
|---|---|
| boolean | true if a node has been removed |
| runtime | |
|---|---|
| LinkedList | O(n) |
| DoublyLinkedList | O(1) |
removes a node at a specific position. First (head) node is at position 0.
| params | |
|---|---|
| name | type |
| position | number |
| return | description |
|---|---|
| boolean | true if a node has been removed |
| runtime |
|---|
| O(n) |
Loops over the linked list from beginning to end and removes the nodes for which the callback returns true.
| params | |
|---|---|
| name | type |
| cb | function |
| return | description |
|---|---|
| number | number of removed nodes |
| runtime |
|---|
| O(n) |
linkedList.removeEach((node) => node.getValue() > 1); // 1
console.log(linkedList.toArray()); // [1]
doublyLinkedList.removeEach((node) => node.getValue() > 1); // 1
console.log(doublyLinkedList.toArray()); // [1]

removes all nodes from the linked list.
| runtime |
|---|
| O(1) |
linkedList.clear();
console.log(linkedList.count()); // 0
console.log(linkedList.head()); // null
doublyLinkedList.clear();
console.log(doublyLinkedList.count()); // 0
console.log(doublyLinkedList.head()); // null
console.log(doublyLinkedList.tail()); // null

returns the node’s value.
| return |
|---|
| object |
returns the next connected node or null if it’s the last node.
| return |
|---|
| LinkedListNode |
returns the node’s value.
| return |
|---|
| object |
returns the previous connected node or null if it’s the first node.
| return |
|---|
| DoublyLinkedListNode |
returns the next connected node or null if it’s the last node.
| return |
|---|
| DoublyLinkedListNode |
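To make the node-based design above concrete, here is a minimal plain-JavaScript sketch of a singly linked list covering part of the documented API (a simplified illustration, not the library’s implementation):

```javascript
// Node wrapper with the getValue()/getNext() accessors described above.
class LinkedListNode {
  constructor(value, next = null) {
    this.value = value;
    this.next = next;
  }
  getValue() { return this.value; }
  getNext() { return this.next; }
}

class LinkedList {
  constructor() {
    this.headNode = null;
    this.size = 0;
  }
  insertFirst(value) { // O(1): the new node becomes the head
    this.headNode = new LinkedListNode(value, this.headNode);
    this.size += 1;
    return this.headNode;
  }
  insertLast(value) { // O(n): walk to the tail, then append
    const node = new LinkedListNode(value);
    if (!this.headNode) {
      this.headNode = node;
    } else {
      let current = this.headNode;
      while (current.getNext()) current = current.getNext();
      current.next = node;
    }
    this.size += 1;
    return node;
  }
  toArray() { // O(n): collect values from head to tail
    const result = [];
    for (let n = this.headNode; n; n = n.getNext()) result.push(n.getValue());
    return result;
  }
  count() { return this.size; }
}

const list = new LinkedList();
list.insertFirst(1);
list.insertFirst(2);
list.insertLast(3);
console.log(list.toArray()); // [2, 1, 3]
```

This makes the complexity difference visible: insertFirst only swaps the head pointer, while insertLast has to traverse the whole list (the doubly linked variant avoids that by also keeping a tail pointer).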
grunt build
Pass two numbers, get a regex-compatible source string for matching ranges. Validated against more than 2.78 million test assertions.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:
What does this do?
This library generates the source string to be passed to new RegExp() for matching a range of numbers.
Example
A string is returned so that you can do whatever you need with it before passing it to new RegExp() (like adding ^ or $ boundaries, defining flags, or combining it with another string).
Why use this library?
Creating regular expressions for matching numbers gets deceptively complicated pretty fast.
For example, let’s say you need a validation regex for matching part of a user-id, postal code, social security number, tax id, etc:
- 1 => /1/ (easy enough)
- 1 through 5 => /[1-5]/ (not bad…)
- 1 or 5 => /(1|5)/ (still easy…)
- 1 through 50 => /([1-9]|[1-4][0-9]|50)/ (uh-oh…)
- 1 through 55 => /([1-9]|[1-4][0-9]|5[0-5])/ (no prob, I can do this…)
- 1 through 555 => /([1-9]|[1-9][0-9]|[1-4][0-9]{2}|5[0-4][0-9]|55[0-5])/ (maybe not…)
- 0001 through 5555 => /(0{3}[1-9]|0{2}[1-9][0-9]|0[1-9][0-9]{2}|[1-4][0-9]{3}|5[0-4][0-9]{2}|55[0-4][0-9]|555[0-5])/ (okay, I get the point!)

The numbers are contrived, but they’re also really basic. In the real world you might need to generate a regex on-the-fly for validation.
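One way to gain confidence in such hand-written patterns is brute-force testing; for example, the 1-through-50 pattern above can be checked against every candidate in plain JavaScript:

```javascript
// Brute-force check of the hand-written range regex for 1..50.
const range = /^([1-9]|[1-4][0-9]|50)$/;

for (let n = 0; n <= 100; n++) {
  const expected = n >= 1 && n <= 50;
  if (range.test(String(n)) !== expected) {
    throw new Error(`mismatch at ${n}`);
  }
}
console.log('1..50 pattern verified');
```

This is the same style of verification the library itself uses, just scaled up to millions of assertions.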
Learn more
If you’re interested in learning more about character classes and other regex features, I personally have always found regular-expressions.info to be pretty useful.
As of April 07, 2019, this library runs >1m test assertions against generated regex-ranges to provide brute-force verification that results are correct.
Tests run in ~280ms on my MacBook Pro, 2.5 GHz Intel Core i7.
Generated regular expressions are optimized:
- ? conditionals when number(s) or range(s) can be positive or negative

Add this library to your JavaScript application with the following line of code:

const toRegexRange = require('to-regex-range');
The main export is a function that takes two integers: the min value and max value (formatted as strings or numbers).
const source = toRegexRange('15', '95');
//=> 1[5-9]|[2-8][0-9]|9[0-5]
const regex = new RegExp(`^${source}$`);
console.log(regex.test('14')); //=> false
console.log(regex.test('50')); //=> true
console.log(regex.test('94')); //=> true
console.log(regex.test('96')); //=> false

Type: boolean
Default: undefined
Wrap the returned value in parentheses when there is more than one regex condition. Useful when you’re dynamically generating ranges.
console.log(toRegexRange('-10', '10'));
//=> -[1-9]|-?10|[0-9]
console.log(toRegexRange('-10', '10', { capture: true }));
//=> (-[1-9]|-?10|[0-9])

Type: boolean
Default: undefined
Use the regex shorthand for [0-9]:
console.log(toRegexRange('0', '999999'));
//=> [0-9]|[1-9][0-9]{1,5}
console.log(toRegexRange('0', '999999', { shorthand: true }));
//=> \d|[1-9]\d{1,5}

Type: boolean
Default: true
This option relaxes matching for leading zeros when ranges are zero-padded.
const source = toRegexRange('-0010', '0010');
const regex = new RegExp(`^${source}$`);
console.log(regex.test('-10')); //=> true
console.log(regex.test('-010')); //=> true
console.log(regex.test('-0010')); //=> true
console.log(regex.test('10')); //=> true
console.log(regex.test('010')); //=> true
console.log(regex.test('0010')); //=> true

When relaxZeros is false, matching is strict:
const source = toRegexRange('-0010', '0010', { relaxZeros: false });
const regex = new RegExp(`^${source}$`);
console.log(regex.test('-10')); //=> false
console.log(regex.test('-010')); //=> false
console.log(regex.test('-0010')); //=> true
console.log(regex.test('10')); //=> false
console.log(regex.test('010')); //=> false
console.log(regex.test('0010')); //=> true

| Range | Result | Compile time |
|---|---|---|
| `toRegexRange(-10, 10)` | `-[1-9]\|-?10\|[0-9]` | 132μs |
| `toRegexRange(-100, -10)` | `-1[0-9]\|-[2-9][0-9]\|-100` | 50μs |
| `toRegexRange(-100, 100)` | `-[1-9]\|-?[1-9][0-9]\|-?100\|[0-9]` | 42μs |
| `toRegexRange(001, 100)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|100` | 109μs |
| `toRegexRange(001, 555)` | `0{0,2}[1-9]\|0?[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | 51μs |
| `toRegexRange(0010, 1000)` | `0{0,2}1[0-9]\|0{0,2}[2-9][0-9]\|0?[1-9][0-9]{2}\|1000` | 31μs |
| `toRegexRange(1, 50)` | `[1-9]\|[1-4][0-9]\|50` | 24μs |
| `toRegexRange(1, 55)` | `[1-9]\|[1-4][0-9]\|5[0-5]` | 23μs |
| `toRegexRange(1, 555)` | `[1-9]\|[1-9][0-9]\|[1-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | 30μs |
| `toRegexRange(1, 5555)` | `[1-9]\|[1-9][0-9]{1,2}\|[1-4][0-9]{3}\|5[0-4][0-9]{2}\|55[0-4][0-9]\|555[0-5]` | 43μs |
| `toRegexRange(111, 555)` | `11[1-9]\|1[2-9][0-9]\|[2-4][0-9]{2}\|5[0-4][0-9]\|55[0-5]` | 38μs |
| `toRegexRange(29, 51)` | `29\|[34][0-9]\|5[01]` | 24μs |
| `toRegexRange(31, 877)` | `3[1-9]\|[4-9][0-9]\|[1-7][0-9]{2}\|8[0-6][0-9]\|87[0-7]` | 32μs |
| `toRegexRange(5, 5)` | `5` | 8μs |
| `toRegexRange(5, 6)` | `5\|6` | 11μs |
| `toRegexRange(1, 2)` | `1\|2` | 6μs |
| `toRegexRange(1, 5)` | `[1-5]` | 15μs |
| `toRegexRange(1, 10)` | `[1-9]\|10` | 22μs |
| `toRegexRange(1, 100)` | `[1-9]\|[1-9][0-9]\|100` | 25μs |
| `toRegexRange(1, 1000)` | `[1-9]\|[1-9][0-9]{1,2}\|1000` | 31μs |
| `toRegexRange(1, 10000)` | `[1-9]\|[1-9][0-9]{1,3}\|10000` | 34μs |
| `toRegexRange(1, 100000)` | `[1-9]\|[1-9][0-9]{1,4}\|100000` | 36μs |
| `toRegexRange(1, 1000000)` | `[1-9]\|[1-9][0-9]{1,5}\|1000000` | 42μs |
| `toRegexRange(1, 10000000)` | `[1-9]\|[1-9][0-9]{1,6}\|10000000` | 42μs |
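Any row of this table can be checked by brute force without the library; for example, the documented result for toRegexRange(29, 51):

```javascript
// Verify the documented pattern for the range 29..51 against every
// candidate number (the source string is taken from the table above).
const source = '29|[34][0-9]|5[01]';
const re = new RegExp(`^(?:${source})$`);

for (let n = 0; n <= 200; n++) {
  const expected = n >= 29 && n <= 51;
  if (re.test(String(n)) !== expected) throw new Error(`mismatch at ${n}`);
}
console.log('29..51 pattern verified');
```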
Order of arguments
When the min is larger than the max, values will be flipped to create a valid range:
Is effectively flipped to:
Steps / increments
This library does not support steps (increments). A PR to add support would be welcome.
New features
Adds support for zero-padding!
Optimizations
Repeating ranges are now grouped using quantifiers. Processing time is roughly the same, but the generated regex is much smaller, which should result in faster matching.
Inspired by the python library range-regex.
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
step to… more | homepage

| Commits | Contributor |
|---|---|
| 63 | jonschlinkert |
| 3 | doowb |
| 2 | realityking |
Jon Schlinkert
Please consider supporting me on Patreon, or start your own Patreon page!
This file was generated by verb-generate-readme, v0.8.0, on April 07, 2019.

# Chokidar
A neat wrapper around Node.js fs.watch / fs.watchFile / FSEvents.
Version 3 is out! Check out our blog post about it: Chokidar 3: How to save 32TB of traffic every week
Node.js fs.watch:
rename.

Node.js fs.watchFile:
Chokidar resolves these problems.
Initially made for Brunch (an ultra-swift web app build tool), it is now used in Microsoft’s Visual Studio Code, gulp, karma, PM2, browserify, webpack, BrowserSync, and many others. It has proven itself in production environments.
Chokidar does still rely on the Node.js core fs module, but when using fs.watch and fs.watchFile for watching, it normalizes the events it receives, often checking for truth by getting file stats and/or dir contents.
On MacOS, chokidar by default uses a native extension exposing the Darwin FSEvents API. This provides very efficient recursive watching compared with implementations like kqueue available on most *nix platforms. Chokidar still does have to do some work to normalize the events received that way as well.
On other platforms, the fs.watch-based implementation is the default, which avoids polling and keeps CPU usage down. Be advised that chokidar will initiate watchers recursively for everything within scope of the paths that have been specified, so be judicious about not wasting system resources by watching much more than needed.
Install with npm:
Then require and use it in your code:
const chokidar = require('chokidar');
// One-liner for current directory
chokidar.watch('.').on('all', (event, path) => {
console.log(event, path);
});// Example of a more typical implementation structure:
// Initialize watcher.
const watcher = chokidar.watch('file, dir, glob, or array', {
ignored: /(^|[\/\\])\../, // ignore dotfiles
persistent: true
});
// Something to use when events are received.
const log = console.log.bind(console);
// Add event listeners.
watcher
.on('add', path => log(`File ${path} has been added`))
.on('change', path => log(`File ${path} has been changed`))
.on('unlink', path => log(`File ${path} has been removed`));
// More possible events.
watcher
.on('addDir', path => log(`Directory ${path} has been added`))
.on('unlinkDir', path => log(`Directory ${path} has been removed`))
.on('error', error => log(`Watcher error: ${error}`))
.on('ready', () => log('Initial scan complete. Ready for changes'))
.on('raw', (event, path, details) => { // internal
log('Raw event info:', event, path, details);
});
// 'add', 'addDir' and 'change' events also receive stat() results as second
// argument when available: https://nodejs.org/api/fs.html#fs_class_fs_stats
watcher.on('change', (path, stats) => {
if (stats) console.log(`File ${path} changed size to ${stats.size}`);
});
// Watch new files.
watcher.add('new-file');
watcher.add(['new-file-2', 'new-file-3', '**/other-file*']);
// Get list of actual paths being watched on the filesystem
var watchedPaths = watcher.getWatched();
// Un-watch some files.
await watcher.unwatch('new-file*');
// Stop watching.
// The method is async!
watcher.close().then(() => console.log('closed'));
// Full list of options. See below for descriptions.
// Do not use this example!
chokidar.watch('file', {
persistent: true,
ignored: '*.txt',
ignoreInitial: false,
followSymlinks: true,
cwd: '.',
disableGlobbing: false,
usePolling: false,
interval: 100,
binaryInterval: 300,
alwaysStat: false,
depth: 99,
awaitWriteFinish: {
stabilityThreshold: 2000,
pollInterval: 100
},
atomic: true // or a custom 'atomicity delay', in milliseconds (default 100)
});chokidar.watch(paths, [options])
paths (string or array of strings). Paths to files, dirs to be watched recursively, or glob patterns.
\), because that’s how they work by the standard; you’ll need to replace them with forward slashes (/).

options (object) Options object as defined below:

- persistent (default: true). Indicates whether the process should continue to run as long as files are being watched. If set to false when using fsevents to watch, no more events will be emitted after ready, even if the process continues to run.
- ignored (anymatch-compatible definition). Defines files/paths to be ignored. The whole relative or absolute path is tested, not just the filename. If a function with two arguments is provided, it gets called twice per path: once with a single argument (the path), a second time with two arguments (the path and the fs.Stats object of that path).
- ignoreInitial (default: false). If set to false then add/addDir events are also emitted for matching paths while instantiating the watching as chokidar discovers these file paths (before the ready event).
- followSymlinks (default: true). When false, only the symlinks themselves will be watched for changes instead of following the link references and bubbling events through the link’s path.
- cwd (no default). The base directory from which watch paths are to be derived. Paths emitted with events will be relative to this.
- disableGlobbing (default: false). If set to true then the strings passed to .watch() and .add() are treated as literal path names, even if they look like globs.
- usePolling (default: false). Whether to use fs.watchFile (backed by polling), or fs.watch. If polling leads to high CPU utilization, consider setting this to false. It is typically necessary to set this to true to successfully watch files over a network, and it may be necessary to successfully watch files in other non-standard situations. Setting to true explicitly on MacOS overrides the useFsEvents default. You may also set the CHOKIDAR_USEPOLLING env variable to true (1) or false (0) in order to override this option.

Polling-specific settings (effective with usePolling: true):
- interval (default: 100). Interval of file system polling, in milliseconds. You may also set the CHOKIDAR_INTERVAL env variable to override this option.
- binaryInterval (default: 300). Interval of file system polling for binary files. (see list of binary extensions)
- useFsEvents (default: true on MacOS). Whether to use the fsevents watching interface if available. When set to true explicitly and fsevents is available this supersedes the usePolling setting. When set to false on MacOS, usePolling: true becomes the default.
- alwaysStat (default: false). If relying upon the fs.Stats object that may get passed with add, addDir, and change events, set this to true to ensure it is provided even in cases where it wasn’t already available from the underlying watch events.
- depth (default: undefined). If set, limits how many levels of subdirectories will be traversed.
- awaitWriteFinish (default: false). By default, the add event will fire when a file first appears on disk, before the entire file has been written. Furthermore, in some cases some change events will be emitted while the file is being written. Especially when watching large files, you may need to wait for the write operation to finish before responding to a file creation or modification. Setting awaitWriteFinish to true (or a truthy value) will poll file size, holding its add and change events until the size does not change for a configurable amount of time. The appropriate duration setting is heavily dependent on the OS and hardware. For accurate detection this parameter should be relatively high, making file watching much less responsive. Use with caution.
options.awaitWriteFinish can be set to an object in order to adjust timing params:

- awaitWriteFinish.stabilityThreshold (default: 2000). Amount of time in milliseconds for a file size to remain constant before emitting its event.
- awaitWriteFinish.pollInterval (default: 100). File size polling interval, in milliseconds.
- atomic (default: true if useFsEvents and usePolling are false). Automatically filters out artifacts that occur when using editors that use “atomic writes” instead of writing directly to the source file. If a file is re-added within 100 ms of being deleted, Chokidar emits a change event rather than unlink then add. If the default of 100 ms does not work well for you, you can override it by setting atomic to a custom value, in milliseconds.

chokidar.watch() produces an instance of FSWatcher. Methods of FSWatcher:
- .add(path / paths): Add files, directories, or glob patterns for tracking. Takes an array of strings or just one string.
- .on(event, callback): Listen for an FS event. Available events: add, addDir, change, unlink, unlinkDir, ready, raw, error. Additionally all is available, which gets emitted with the underlying event name and path for every event other than ready, raw, and error. raw is internal; use it carefully.
- .unwatch(path / paths): Stop watching files, directories, or glob patterns. Takes an array of strings or just one string. Use with await to ensure bugs don’t happen.
- .close(): async. Removes all listeners from watched files. Asynchronous; returns a Promise.
- .getWatched(): Returns an object representing all the paths on the file system being watched by this FSWatcher instance. The object’s keys are all the directories (using absolute paths unless the cwd option was used), and the values are arrays of the names of the items contained in each directory.

If you need a CLI interface for your file watching, check out chokidar-cli, allowing you to execute a command on each change, or get a stdio stream of change events.
npm WARN optional dep failed, continuing fsevents@n.n.n
This is how npm handles optional dependencies and is not indicative of a problem. Even if accompanied by other related error messages, Chokidar should function properly.

TypeError: fsevents is not a constructor
rm -rf node_modules package-lock.json yarn.lock && npm install, or update your dependency that uses chokidar.

ENOSPC error on Linux, like this:
bash: cannot set terminal process group (-1): Inappropriate ioctl for device
bash: no job control in this shell
Error: watch /home/ ENOSPC

This means the system ran out of inotify watches; you can raise the limit:

echo fs.inotify.max_user_watches=524288 | sudo tee -a /etc/sysctl.conf && sudo sysctl -p

Why was chokidar named this way? What’s the meaning behind it?
Chowkidar is a transliteration of a Hindi word meaning ‘watchman, gatekeeper’, चौकीदार. This ultimately comes from Sanskrit चतुष्क (crossway, quadrangle, consisting-of-four).
Given some data, jsesc returns a stringified representation of that data. jsesc is similar to JSON.stringify() except:
For any input, jsesc generates the shortest possible valid printable-ASCII-only output. Here’s an online demo.
jsesc’s output can be used instead of JSON.stringify’s to avoid mojibake and other encoding issues, or even to avoid errors when passing JSON-formatted data (which may contain U+2028 LINE SEPARATOR, U+2029 PARAGRAPH SEPARATOR, or lone surrogates) to a JavaScript parser or a UTF-8 encoder.
Via npm:
In Node.js:
jsesc(value, options)

This function takes a value and returns an escaped version of the value where any characters that are not printable ASCII symbols are escaped using the shortest possible (but valid) escape sequences for use in JavaScript strings. The first supported value type is strings:
jsesc('Ich ♥ Bücher');
// → 'Ich \\u2665 B\\xFCcher'
jsesc('foo 𝌆 bar');
// → 'foo \\uD834\\uDF06 bar'

Instead of a string, the value can also be an array, an object, a map, a set, or a buffer. In such cases, jsesc returns a stringified version of the value where any characters that are not printable ASCII symbols are escaped in the same way.
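The per-character escaping idea can be sketched in a few lines of plain JavaScript; note this is a simplified stand-in (uppercase hex, code-unit by code-unit), not jsesc’s actual implementation:

```javascript
// Escape every code unit outside printable ASCII as \xHH or \uHHHH.
function escapeNonAscii(str) {
  let out = '';
  // split('') iterates UTF-16 code units, so each half of a surrogate
  // pair (e.g. for an astral symbol) gets its own \uHHHH escape.
  for (const ch of str.split('')) {
    const code = ch.charCodeAt(0);
    if (code >= 0x20 && code <= 0x7e) {
      out += ch; // printable ASCII passes through unchanged
    } else if (code <= 0xff) {
      out += '\\x' + code.toString(16).toUpperCase().padStart(2, '0');
    } else {
      out += '\\u' + code.toString(16).toUpperCase().padStart(4, '0');
    }
  }
  return out;
}

console.log(escapeNonAscii('Ich ♥ Bücher')); // Ich \u2665 B\xFCcher
console.log(escapeNonAscii('foo 𝌆 bar'));    // foo \uD834\uDF06 bar
```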
// Escaping an array
jsesc([
'Ich ♥ Bücher', 'foo 𝌆 bar'
]);
// → '[\'Ich \\u2665 B\\xFCcher\',\'foo \\uD834\\uDF06 bar\']'
// Escaping an object
jsesc({
'Ich ♥ Bücher': 'foo 𝌆 bar'
});
// → '{\'Ich \\u2665 B\\xFCcher\':\'foo \\uD834\\uDF06 bar\'}'

The optional options argument accepts an object with the following options:

quotes

The default value for the quotes option is 'single'. This means that any occurrences of ' in the input string are escaped as \', so that the output can be used in a string literal wrapped in single quotes.
jsesc('`Lorem` ipsum "dolor" sit \'amet\' etc.');
// → '`Lorem` ipsum "dolor" sit \\\'amet\\\' etc.'
jsesc('`Lorem` ipsum "dolor" sit \'amet\' etc.', {
'quotes': 'single'
});
// → '`Lorem` ipsum "dolor" sit \\\'amet\\\' etc.'
// → "`Lorem` ipsum \"dolor\" sit \\'amet\\' etc."

If you want to use the output as part of a string literal wrapped in double quotes, set the quotes option to 'double'.
jsesc('`Lorem` ipsum "dolor" sit \'amet\' etc.', {
'quotes': 'double'
});
// → '`Lorem` ipsum \\"dolor\\" sit \'amet\' etc.'
// → "`Lorem` ipsum \\\"dolor\\\" sit 'amet' etc."

If you want to use the output as part of a template literal (i.e. wrapped in backticks), set the quotes option to 'backtick'.
jsesc('`Lorem` ipsum "dolor" sit \'amet\' etc.', {
'quotes': 'backtick'
});
// → '\\`Lorem\\` ipsum "dolor" sit \'amet\' etc.'
// → "\\`Lorem\\` ipsum \"dolor\" sit 'amet' etc."
// → `\\\`Lorem\\\` ipsum "dolor" sit 'amet' etc.`

This setting also affects the output for arrays and objects:
jsesc({ 'Ich ♥ Bücher': 'foo 𝌆 bar' }, {
'quotes': 'double'
});
// → '{"Ich \\u2665 B\\xFCcher":"foo \\uD834\\uDF06 bar"}'
jsesc([ 'Ich ♥ Bücher', 'foo 𝌆 bar' ], {
'quotes': 'double'
});
// → '["Ich \\u2665 B\\xFCcher","foo \\uD834\\uDF06 bar"]'

numbers

The default value for the numbers option is 'decimal'. This means that any numeric values are represented using decimal integer literals. Other valid options are 'binary', 'octal', and 'hexadecimal', which result in binary integer literals, octal integer literals, and hexadecimal integer literals, respectively.
jsesc(42, {
'numbers': 'binary'
});
// → '0b101010'
jsesc(42, {
'numbers': 'octal'
});
// → '0o52'
jsesc(42, {
'numbers': 'decimal'
});
// → '42'
jsesc(42, {
'numbers': 'hexadecimal'
});
// → '0x2A'

wrap

The wrap option takes a boolean value (true or false), and defaults to false (disabled). When enabled, the output is a valid JavaScript string literal wrapped in quotes. The type of quotes can be specified through the quotes setting.
jsesc('Lorem ipsum "dolor" sit \'amet\' etc.', {
'quotes': 'single',
'wrap': true
});
// → '\'Lorem ipsum "dolor" sit \\\'amet\\\' etc.\''
// → "\'Lorem ipsum \"dolor\" sit \\\'amet\\\' etc.\'"
jsesc('Lorem ipsum "dolor" sit \'amet\' etc.', {
'quotes': 'double',
'wrap': true
});
// → '"Lorem ipsum \\"dolor\\" sit \'amet\' etc."'
// → "\"Lorem ipsum \\\"dolor\\\" sit \'amet\' etc.\""

es6

The es6 option takes a boolean value (true or false), and defaults to false (disabled). When enabled, any astral Unicode symbols in the input are escaped using ECMAScript 6 Unicode code point escape sequences instead of using separate escape sequences for each surrogate half. If backwards compatibility with ES5 environments is a concern, don’t enable this setting. If the json setting is enabled, the value for the es6 setting is ignored (as if it were false).
// By default, the `es6` option is disabled:
jsesc('foo 𝌆 bar 💩 baz');
// → 'foo \\uD834\\uDF06 bar \\uD83D\\uDCA9 baz'
// To explicitly disable it:
jsesc('foo 𝌆 bar 💩 baz', {
'es6': false
});
// → 'foo \\uD834\\uDF06 bar \\uD83D\\uDCA9 baz'
// To enable it:
jsesc('foo 𝌆 bar 💩 baz', {
'es6': true
});
// → 'foo \\u{1D306} bar \\u{1F4A9} baz'

escapeEverything

The escapeEverything option takes a boolean value (true or false), and defaults to false (disabled). When enabled, all the symbols in the output are escaped, even printable ASCII symbols.
jsesc('lolwat"foo\'bar', {
'escapeEverything': true
});
// → '\\x6C\\x6F\\x6C\\x77\\x61\\x74\\"\\x66\\x6F\\x6F\\\'\\x62\\x61\\x72'
// → "\\x6C\\x6F\\x6C\\x77\\x61\\x74\\\"\\x66\\x6F\\x6F\\'\\x62\\x61\\x72"

This setting also affects the output for string literals within arrays and objects.
minimal

The minimal option takes a boolean value (true or false), and defaults to false (disabled). When enabled, only a limited set of symbols in the output are escaped:

\0
\b
\t
\n
\f
\r
\\
\u2028
\u2029
whatever symbol is being used for wrapping (see the quotes option)

Note: with this option enabled, jsesc output is no longer guaranteed to be ASCII-safe.

isScriptContext

The isScriptContext option takes a boolean value (true or false), and defaults to false (disabled). When enabled, occurrences of </script and </style in the output are escaped as <\/script and <\/style, and <!-- is escaped as \x3C!-- (or \u003C!-- when the json option is enabled). This setting is useful when jsesc’s output ends up as part of a <script> or <style> element in an HTML document.
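The substitutions just described can be sketched in plain JavaScript. This is an illustrative approximation only: escapeScriptContext is a made-up helper name, and this is not jsesc's actual implementation.

```javascript
// Hypothetical sketch of the script-context substitutions (assumed names;
// not jsesc's real code):
function escapeScriptContext(output) {
  return output
    .replace(/<\/(script|style)/gi, '<\\/$1') // </script → <\/script, </style → <\/style
    .replace(/<!--/g, '\\x3C!--');            // <!-- → \x3C!--
}
console.log(escapeScriptContext('var s = "</script><!--";'));
// var s = "<\/script>\x3C!--";
```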
compact

The compact option takes a boolean value (true or false), and defaults to true (enabled). When enabled, the output for arrays and objects is as compact as possible; it’s not formatted nicely.
jsesc({ 'Ich ♥ Bücher': 'foo 𝌆 bar' }, {
'compact': true // this is the default
});
// → '{\'Ich \\u2665 B\\xFCcher\':\'foo \\uD834\\uDF06 bar\'}'
jsesc({ 'Ich ♥ Bücher': 'foo 𝌆 bar' }, {
'compact': false
});
// → '{\n\t\'Ich \\u2665 B\\xFCcher\': \'foo \\uD834\\uDF06 bar\'\n}'
jsesc([ 'Ich ♥ Bücher', 'foo 𝌆 bar' ], {
'compact': false
});
// → '[\n\t\'Ich \\u2665 B\\xFCcher\',\n\t\'foo \\uD834\\uDF06 bar\'\n]'

This setting has no effect on the output for strings.
indent

The indent option takes a string value, and defaults to '\t'. When the compact setting is disabled (false), the value of the indent option is used to format the output for arrays and objects.
jsesc({ 'Ich ♥ Bücher': 'foo 𝌆 bar' }, {
'compact': false,
'indent': '\t' // this is the default
});
// → '{\n\t\'Ich \\u2665 B\\xFCcher\': \'foo \\uD834\\uDF06 bar\'\n}'
jsesc({ 'Ich ♥ Bücher': 'foo 𝌆 bar' }, {
'compact': false,
'indent': ' '
});
// → '{\n \'Ich \\u2665 B\\xFCcher\': \'foo \\uD834\\uDF06 bar\'\n}'
jsesc([ 'Ich ♥ Bücher', 'foo 𝌆 bar' ], {
'compact': false,
'indent': ' '
});
// → '[\n \'Ich \\u2665 B\\xFCcher\',\n \'foo \\uD834\\uDF06 bar\'\n]'

This setting has no effect on the output for strings.

indentLevel

The indentLevel option takes a numeric value, and defaults to 0. It represents the current indentation level, i.e. the number of times the value of the indent option is repeated.
jsesc(['a', 'b', 'c'], {
'compact': false,
'indentLevel': 1
});
// → '[\n\t\t\'a\',\n\t\t\'b\',\n\t\t\'c\'\n\t]'
jsesc(['a', 'b', 'c'], {
'compact': false,
'indentLevel': 2
});
// → '[\n\t\t\t\'a\',\n\t\t\t\'b\',\n\t\t\t\'c\'\n\t\t]'

json

The json option takes a boolean value (true or false), and defaults to false (disabled). When enabled, the output is valid JSON. Hexadecimal character escape sequences and the \v or \0 escape sequences are not used. Setting json: true implies quotes: 'double', wrap: true, es6: false, although these values can still be overridden if needed; in such cases, the output won’t be valid JSON anymore.
jsesc('foo\x00bar\xFF\uFFFDbaz', {
'json': true
});
// → '"foo\\u0000bar\\u00FF\\uFFFDbaz"'
jsesc({ 'foo\x00bar\xFF\uFFFDbaz': 'foo\x00bar\xFF\uFFFDbaz' }, {
'json': true
});
// → '{"foo\\u0000bar\\u00FF\\uFFFDbaz":"foo\\u0000bar\\u00FF\\uFFFDbaz"}'
jsesc([ 'foo\x00bar\xFF\uFFFDbaz', 'foo\x00bar\xFF\uFFFDbaz' ], {
'json': true
});
// → '["foo\\u0000bar\\u00FF\\uFFFDbaz","foo\\u0000bar\\u00FF\\uFFFDbaz"]'
// Values that are acceptable in JSON but aren’t strings, arrays, or object
// literals can’t be escaped, so they’ll just be preserved:
jsesc([ 'foo\x00bar', [1, '©', { 'foo': true, 'qux': null }], 42 ], {
'json': true
});
// → '["foo\\u0000bar",[1,"\\u00A9",{"foo":true,"qux":null}],42]'
// Values that aren’t allowed in JSON are run through `JSON.stringify()`:
jsesc([ undefined, -Infinity ], {
'json': true
});
// → '[null,null]'

Note: Using this option on objects or arrays that contain non-string values relies on JSON.stringify(). For legacy environments like IE ≤ 7, use a JSON polyfill.
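Since the json option delegates non-string handling to JSON.stringify(), the coercion to null can be observed with plain Node.js, no jsesc required:

```javascript
// Plain JSON.stringify() already maps values that are not allowed in JSON
// to null inside arrays, which is the behavior described above:
const out = JSON.stringify([undefined, -Infinity, NaN]);
console.log(out);
// [null,null,null]
```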
lowercaseHex

The lowercaseHex option takes a boolean value (true or false), and defaults to false (disabled). When enabled, any alphabetical hexadecimal digits in escape sequences as well as any hexadecimal integer literals (see the numbers option) in the output are in lowercase.
jsesc('Ich ♥ Bücher', {
'lowercaseHex': true
});
// → 'Ich \\u2665 B\\xfccher'
// ^^
jsesc(42, {
'numbers': 'hexadecimal',
'lowercaseHex': true
});
// → '0x2a'
// ^^

jsesc.version

A string representing the semantic version number.
jsesc binary

To use the jsesc binary in your shell, simply install jsesc globally using npm:
After that you’re able to escape strings from the command line:
To escape arrays or objects containing string values, use the -o/--object option:
To prettify the output in such cases, use the -p/--pretty option:
$ jsesc --pretty '{ "föo": "♥", "bår": "𝌆 baz" }'
{
'f\xF6o': '\u2665',
'b\xE5r': '\uD834\uDF06 baz'
}

For valid JSON output, use the -j/--json option:
$ jsesc --json --pretty '{ "föo": "♥", "bår": "𝌆 baz" }'
{
"f\u00F6o": "\u2665",
"b\u00E5r": "\uD834\uDF06 baz"
}

Read a local JSON file, escape any non-ASCII symbols, and save the result to a new file:
Or do the same with an online JSON file:
See jsesc --help for the full list of options.
As of v2.0.0, jsesc supports Node.js v4+ only.
Older versions (up to jsesc v1.3.0) support Chrome 27, Firefox 3, Safari 4, Opera 10, IE 6, Node.js v6.0.0, Narwhal 0.3.2, RingoJS 0.8-0.11, PhantomJS 1.9.0, and Rhino 1.7RC4. Note: Using the json option on objects or arrays that contain non-string values relies on JSON.stringify(). For legacy environments like IE ≤ 7, use a JSON polyfill.
Author: Mathias Bynens
A querystring parsing and stringifying library with some added security.
Lead Maintainer: Jordan Harband
The qs module was originally created and maintained by TJ Holowaychuk.
var qs = require('qs');
var assert = require('assert');
var obj = qs.parse('a=c');
assert.deepEqual(obj, { a: 'c' });
var str = qs.stringify(obj);
assert.equal(str, 'a=c');

qs allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets []. For example, the string 'foo[bar]=baz' converts to the nested object { foo: { bar: 'baz' } }.
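As an illustration of that mapping, here is a minimal plain-JavaScript sketch of turning one bracketed key into a nested object. parseBracketKey is a hypothetical helper; qs's real parser additionally handles depth limits, arrays, URI decoding, and merging.

```javascript
// Minimal sketch of how a bracketed key maps to a nested object
// (illustrative only; not qs's implementation):
function parseBracketKey(pair) {
  const [rawKey, value] = pair.split('=');
  const keys = rawKey.replace(/\]/g, '').split('['); // 'foo[bar]' → ['foo', 'bar']
  const result = {};
  let node = result;
  keys.slice(0, -1).forEach(k => { node = node[k] = {}; }); // build intermediate objects
  node[keys[keys.length - 1]] = value;                      // assign the leaf value
  return result;
}
console.log(JSON.stringify(parseBracketKey('foo[bar]=baz')));
// {"foo":{"bar":"baz"}}
```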
When using the plainObjects option the parsed value is returned as a null object, created via Object.create(null) and as such you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like:
var nullObject = qs.parse('a[hasOwnProperty]=b', { plainObjects: true });
assert.deepEqual(nullObject, { a: { hasOwnProperty: 'b' } });

By default, parameters that would overwrite properties on the object prototype are ignored. If you wish to keep the data from those fields, either use plainObjects as mentioned above, or set allowPrototypes to true, which will allow user input to overwrite those properties. WARNING: It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.
var protoObject = qs.parse('a[hasOwnProperty]=b', { allowPrototypes: true });
assert.deepEqual(protoObject, { a: { hasOwnProperty: 'b' } });

URI encoded strings work too:
You can also nest your objects, like 'foo[bar][baz]=foobarbaz':
By default, when nesting objects qs will only parse up to 5 children deep. This means if you attempt to parse a string like 'a[b][c][d][e][f][g][h][i]=j' your resulting object will be:
var expected = {
a: {
b: {
c: {
d: {
e: {
f: {
'[g][h][i]': 'j'
}
}
}
}
}
}
};
var string = 'a[b][c][d][e][f][g][h][i]=j';
assert.deepEqual(qs.parse(string), expected);

This depth can be overridden by passing a depth option to qs.parse(string, [options]):
var deep = qs.parse('a[b][c][d][e][f][g][h][i]=j', { depth: 1 });
assert.deepEqual(deep, { a: { b: { '[c][d][e][f][g][h][i]': 'j' } } });

The depth limit helps mitigate abuse when qs is used to parse user input, and it is recommended to keep it a reasonably small number.
For similar reasons, by default qs will only parse up to 1000 parameters. This can be overridden by passing a parameterLimit option:
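The effect of such a limit can be sketched in plain JavaScript. This is a simplified model, not qs internals:

```javascript
// Only the first `limit` '&'-separated pairs are considered; the rest are
// ignored (a toy model of what a parameter limit does):
function limitParams(query, limit) {
  return query.split('&').slice(0, limit).join('&');
}
console.log(limitParams('a=b&c=d&e=f', 2));
// a=b&c=d
```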
To bypass the leading question mark, use ignoreQueryPrefix:
var prefixed = qs.parse('?a=b&c=d', { ignoreQueryPrefix: true });
assert.deepEqual(prefixed, { a: 'b', c: 'd' });

An optional delimiter can also be passed:
var delimited = qs.parse('a=b;c=d', { delimiter: ';' });
assert.deepEqual(delimited, { a: 'b', c: 'd' });

Delimiters can be a regular expression too:
var regexed = qs.parse('a=b;c=d,e=f', { delimiter: /[;,]/ });
assert.deepEqual(regexed, { a: 'b', c: 'd', e: 'f' });

Option allowDots can be used to enable dot notation:
var withDots = qs.parse('a.b=c', { allowDots: true });
assert.deepEqual(withDots, { a: { b: 'c' } });

qs can also parse arrays using a similar [] notation:
You may specify an index as well:
Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, qs will compact a sparse array to only the existing values preserving their order:
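The compaction mirrors what plain JavaScript does when the holes are dropped from a sparse array (an illustration of the documented behavior, not qs internals):

```javascript
// A sparse array with only indices 1 and 15 set; dropping the holes keeps
// just the existing values, in their original order:
const sparse = [];
sparse[1] = 'b';
sparse[15] = 'c';
const compacted = sparse.filter(() => true); // Array#filter skips holes
console.log(compacted);
// [ 'b', 'c' ]
```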
Note that an empty string is also a value, and will be preserved:
var withEmptyString = qs.parse('a[]=&a[]=b');
assert.deepEqual(withEmptyString, { a: ['', 'b'] });
var withIndexedEmptyString = qs.parse('a[0]=b&a[1]=&a[2]=c');
assert.deepEqual(withIndexedEmptyString, { a: ['b', '', 'c'] });

qs will also limit specifying indices in an array to a maximum index of 20. Any array members with an index greater than 20 will instead be converted to an object with the index as the key:
This limit can be overridden by passing an arrayLimit option:
var withArrayLimit = qs.parse('a[1]=b', { arrayLimit: 0 });
assert.deepEqual(withArrayLimit, { a: { '1': 'b' } });

To disable array parsing entirely, set parseArrays to false.
var noParsingArrays = qs.parse('a[]=b', { parseArrays: false });
assert.deepEqual(noParsingArrays, { a: { '0': 'b' } });

If you mix notations, qs will merge the two items into an object:
var mixedNotation = qs.parse('a[0]=b&a[b]=c');
assert.deepEqual(mixedNotation, { a: { '0': 'b', b: 'c' } });

You can also create arrays of objects:
When stringifying, qs by default URI encodes output. Objects are stringified as you would expect:
assert.equal(qs.stringify({ a: 'b' }), 'a=b');
assert.equal(qs.stringify({ a: { b: 'c' } }), 'a%5Bb%5D=c');

This encoding can be disabled by setting the encode option to false:
var unencoded = qs.stringify({ a: { b: 'c' } }, { encode: false });
assert.equal(unencoded, 'a[b]=c');

Encoding can be disabled for keys by setting the encodeValuesOnly option to true:
var encodedValues = qs.stringify(
{ a: 'b', c: ['d', 'e=f'], f: [['g'], ['h']] },
{ encodeValuesOnly: true }
);
assert.equal(encodedValues, 'a=b&c[0]=d&c[1]=e%3Df&f[0][0]=g&f[1][0]=h');

This encoding can also be replaced by a custom encoding method set as the encoder option:
var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str) {
// Passed in values `a`, `b`, `c`
return encodeURIComponent(str); // return the encoded string
}});

(Note: the encoder option does not apply if encode is false.)
Analogous to the encoder, there is a decoder option for parse to override the decoding of properties and values:
var decoded = qs.parse('x=z', { decoder: function (str) {
// Passed in values `x`, `z`
return decodeURIComponent(str); // return the decoded string
}});

Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases will be URI encoded during real usage.
When arrays are stringified, by default they are given explicit indices:
You may override this by setting the indices option to false:
You may use the arrayFormat option to specify the format of the output array:
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'indices' })
// 'a[0]=b&a[1]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'brackets' })
// 'a[]=b&a[]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'repeat' })
// 'a=b&a=c'

When objects are stringified, by default they use bracket notation:
You may override this to use dot notation by setting the allowDots option to true:
Empty strings and null values will omit the value, but the equals sign (=) remains in place:
Keys with no values (such as an empty object or array) will return nothing:
assert.equal(qs.stringify({ a: [] }), '');
assert.equal(qs.stringify({ a: {} }), '');
assert.equal(qs.stringify({ a: [{}] }), '');
assert.equal(qs.stringify({ a: { b: []} }), '');
assert.equal(qs.stringify({ a: { b: {}} }), '');

Properties that are set to undefined will be omitted entirely:
The query string may optionally be prepended with a question mark:
The delimiter may be overridden with stringify as well:
If you only want to override the serialization of Date objects, you can provide a serializeDate option:
var date = new Date(7);
assert.equal(qs.stringify({ a: date }), 'a=1970-01-01T00:00:00.007Z'.replace(/:/g, '%3A'));
assert.equal(
qs.stringify({ a: date }, { serializeDate: function (d) { return d.getTime(); } }),
'a=7'
);

You may use the sort option to affect the order of parameter keys:
function alphabeticalSort(a, b) {
return a.localeCompare(b);
}
assert.equal(qs.stringify({ a: 'c', z: 'y', b : 'f' }, { sort: alphabeticalSort }), 'a=c&b=f&z=y');

Finally, you can use the filter option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification:
function filterFunc(prefix, value) {
if (prefix == 'b') {
// Return an `undefined` value to omit a property.
return;
}
if (prefix == 'e[f]') {
return value.getTime();
}
if (prefix == 'e[g][0]') {
return value * 2;
}
return value;
}
qs.stringify({ a: 'b', c: 'd', e: { f: new Date(123), g: [2] } }, { filter: filterFunc });
// 'a=b&c=d&e[f]=123&e[g][0]=4'
qs.stringify({ a: 'b', c: 'd', e: 'f' }, { filter: ['a', 'e'] });
// 'a=b&e=f'
qs.stringify({ a: ['b', 'c', 'd'], e: 'f' }, { filter: ['a', 0, 2] });
// 'a[0]=b&a[2]=d'

null values

By default, null values are treated like empty strings:
Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings.
To distinguish between null values and empty strings use the strictNullHandling flag. In the result string the null values have no = sign:
var strictNull = qs.stringify({ a: null, b: '' }, { strictNullHandling: true });
assert.equal(strictNull, 'a&b=');

To parse values without = back to null, use the strictNullHandling flag:
var parsedStrictNull = qs.parse('a&b=', { strictNullHandling: true });
assert.deepEqual(parsedStrictNull, { a: null, b: '' });

To completely skip rendering keys with null values, use the skipNulls flag:
var nullsSkipped = qs.stringify({ a: 'b', c: null}, { skipNulls: true });
assert.equal(nullsSkipped, 'a=b');

By default, the encoding and decoding of characters is done in UTF-8. If you wish to encode query strings to a different character set (e.g. Shift JIS) you can use the qs-iconv library:
var encoder = require('qs-iconv/encoder')('shift_jis');
var shiftJISEncoded = qs.stringify({ a: 'こんにちは!' }, { encoder: encoder });
assert.equal(shiftJISEncoded, 'a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I');

This also works for decoding of query strings:
var decoder = require('qs-iconv/decoder')('shift_jis');
var obj = qs.parse('a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I', { decoder: decoder });
assert.deepEqual(obj, { a: 'こんにちは!' });

RFC 3986 is used as the default format option and encodes ' ' to %20, which is backward compatible. At the same time, output can be stringified as per RFC 1738, with ' ' encoded as '+'.
assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');
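The space-handling difference between the two formats can be reproduced with plain JavaScript (illustrative only; qs applies this via its format option):

```javascript
// RFC 3986 keeps spaces percent-encoded; RFC 1738 rewrites %20 as '+':
const rfc3986 = encodeURIComponent('b c');
const rfc1738 = rfc3986.replace(/%20/g, '+');
console.log(rfc3986); // b%20c
console.log(rfc1738); // b+c
```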
Match files using the patterns the shell uses, like stars and stuff.
This is a glob implementation in JavaScript. It uses the minimatch library to do its matching.

Install with npm
npm i glob
var glob = require("glob")
// options is optional
glob("**/*.js", options, function (er, files) {
// files is an array of filenames.
// If the `nonull` option is set, and nothing
// was found, then files is ["**/*.js"]
// er is an error object or null.
})

“Globs” are the patterns you type when you do stuff like ls *.js on the command line, or put build/* in a .gitignore file.
Before parsing the path part patterns, braced sections are expanded into a set. Braced sections start with { and end with }, with any number of comma-delimited sections within. Braced sections may contain slash characters, so a{/b/c,bcd} would expand into a/b/c and abcd.
The following characters have special magic meaning when used in a path portion:
* - Matches 0 or more characters in a single path portion
? - Matches 1 character
[...] - Matches a range of characters, similar to a RegExp range. If the first character of the range is ! or ^ then it matches any character not in the range.
!(pattern|pattern|pattern) - Matches anything that does not match any of the patterns provided.
?(pattern|pattern|pattern) - Matches zero or one occurrence of the patterns provided.
+(pattern|pattern|pattern) - Matches one or more occurrences of the patterns provided.
*(a|b|c) - Matches zero or more occurrences of the patterns provided.
@(pattern|pat*|pat?erN) - Matches exactly one of the patterns provided.
** - If a “globstar” is alone in a path portion, then it matches zero or more directories and subdirectories searching for matches. It does not crawl symlinked directories.

If a file or directory path portion has a . as the first character, then it will not match any glob pattern unless that pattern’s corresponding path part also has a . as its first character.
For example, the pattern a/.*/c would match the file at a/.b/c. However the pattern a/*/c would not, because * does not start with a dot character.
You can make glob treat dots as normal characters by setting dot:true in the options.
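The leading-dot rule can be sketched for a single path portion in plain JavaScript. portionMatch is a made-up helper; minimatch implements the real semantics, including character classes and extglobs.

```javascript
// Match one path portion against a glob portion, honoring the rule that
// `*` and `?` never match a leading dot unless `dot: true` is set:
function portionMatch(pattern, name, opts = {}) {
  if (!opts.dot && name.startsWith('.') && !pattern.startsWith('.')) {
    return false; // dotfiles only match patterns that also start with '.'
  }
  const source = pattern
    .replace(/[.+^${}()|[\]\\]/g, '\\$&') // escape regex specials (not * or ?)
    .replace(/\*/g, '[^/]*')              // * matches within one portion
    .replace(/\?/g, '[^/]');              // ? matches exactly one character
  return new RegExp('^' + source + '$').test(name);
}
console.log(portionMatch('*', '.b'));                // false
console.log(portionMatch('.*', '.b'));               // true
console.log(portionMatch('*', '.b', { dot: true })); // true
```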
If you set matchBase:true in the options, and the pattern has no slashes in it, then it will seek for any file anywhere in the tree with a matching basename. For example, *.js would match test/simple/basic.js.
If no matching files are found, then an empty array is returned. This differs from the shell, where the pattern itself is returned. For example:
$ echo a*s*d*f
a*s*d*f
To get the bash-style behavior, set nonull:true in the options.
man sh
man bash (Search for “Pattern Matching”)
man 3 fnmatch
man 5 gitignore

glob.hasMagic(pattern, [options])

Returns true if there are any special characters in the pattern, and false otherwise.
Note that the options affect the results. If noext:true is set in the options object, then +(a|b) will not be considered a magic pattern. If the pattern has a brace expansion, like a/{b/c,x/y} then that is considered magical, unless nobrace:true is set in the options.
glob(pattern, [options], cb)

pattern: {String} Pattern to be matched
options: {Object}
cb: {Function}
err: {Error | null}
matches: {Array<String>} filenames found matching the pattern

Perform an asynchronous glob search.

glob.sync(pattern, [options])

pattern: {String} Pattern to be matched
options: {Object}
returns: {Array<String>} filenames found matching the pattern

Perform a synchronous glob search.
Create a Glob object by instantiating the glob.Glob class.
It’s an EventEmitter, and starts walking the filesystem to find matches immediately.
pattern: {String} pattern to search for
options: {Object}
cb: {Function} Called when an error occurs, or matches are found
err: {Error | null}
matches: {Array<String>} filenames found matching the pattern

Note that if the sync flag is set in the options, then matches will be immediately available on the g.found member.
Properties:
minimatch - The minimatch object that the glob uses.
options - The options object passed in.
aborted - Boolean which is set to true when calling abort(). There is no way at this time to continue a glob search after aborting, but you can re-use the statCache to avoid having to duplicate syscalls.
cache - Convenience object. Each field has the following possible values:
  false - Path does not exist
  true - Path exists
  'FILE' - Path exists, and is not a directory
  'DIR' - Path exists, and is a directory
  [file, entries, ...] - Path exists, is a directory, and the array value is the results of fs.readdir
statCache - Cache of fs.stat results, to prevent statting the same path multiple times.
symlinks - A record of which paths are symbolic links, which is relevant in resolving ** patterns.
realpathCache - An optional object which is passed to fs.realpath to minimize unnecessary syscalls. It is stored on the instantiated Glob object, and may be re-used.

Events:
end - When the matching is finished, this is emitted with all the matches found. If the nonull option is set, and no match was found, then the matches list contains the original pattern. The matches are sorted, unless the nosort flag is set.
match - Every time a match is found, this is emitted with the specific thing that matched. It is not deduplicated or resolved to a realpath.
error - Emitted when an unexpected error is encountered, or whenever any fs error occurs if options.strict is set.
abort - When abort() is called, this event is raised.

Methods:
pause - Temporarily stop the search
resume - Resume the search
abort - Stop the search forever

All the options that can be passed to Minimatch can also be passed to Glob to change pattern matching behavior. Also, some have been added, or have glob-specific ramifications.
All options are false by default, unless otherwise noted.
All options are added to the Glob object, as well.
If you are running many glob operations, you can pass a Glob object as the options argument to a subsequent operation to shortcut some stat and readdir calls. At the very least, you may pass in shared symlinks, statCache, realpathCache, and cache options, so that parallel glob operations will be sped up by sharing information about the filesystem.
cwd - The current working directory in which to search. Defaults to process.cwd().
root - The place where patterns starting with / will be mounted onto. Defaults to path.resolve(options.cwd, "/") (/ on Unix systems, and C:\ or some such on Windows.)
dot - Include .dot files in normal matches and globstar matches. Note that an explicit dot in a portion of the pattern will always match dot files.
nomount - By default, a pattern starting with a forward-slash will be “mounted” onto the root setting, so that a valid filesystem path is returned. Set this flag to disable that behavior.
mark - Add a / character to directory matches. Note that this requires additional stat calls.
nosort - Don’t sort the results.
stat - Set to true to stat all results. This reduces performance somewhat, and is completely unnecessary, unless readdir is presumed to be an untrustworthy indicator of file existence.
silent - When an unusual error is encountered when attempting to read a directory, a warning will be printed to stderr. Set the silent option to true to suppress these warnings.
strict - When an unusual error is encountered when attempting to read a directory, the process will just continue on in search of other matches. Set the strict option to raise an error in these cases.
cache - See cache property above. Pass in a previously generated cache object to save some fs calls.
statCache - A cache of results of filesystem information, to prevent unnecessary stat calls. While it should not normally be necessary to set this, you may pass the statCache from one glob() call to the options object of another, if you know that the filesystem will not change between calls. (See “Race Conditions” below.)
symlinks - A cache of known symbolic links. You may pass in a previously generated symlinks object to save lstat calls when resolving ** matches.
sync - DEPRECATED: use glob.sync(pattern, opts) instead.
nounique - In some cases, brace-expanded patterns can result in the same file showing up multiple times in the result set. By default, this implementation prevents duplicates in the result set. Set this flag to disable that behavior.
nonull - Set to never return an empty set, instead returning a set containing the pattern itself. This is the default in glob(3).
debug - Set to enable debug logging in minimatch and glob.
nobrace - Do not expand {a,b} and {1..3} brace sets.
noglobstar - Do not match ** against multiple filenames. (Ie, treat it as a normal * instead.)
noext - Do not match +(a|b) “extglob” patterns.
nocase - Perform a case-insensitive match. Note: on case-insensitive filesystems, non-magic patterns will match by default, since stat and readdir will not raise errors.
matchBase - Perform a basename-only match if the pattern does not contain any slash characters. That is, *.js would be treated as equivalent to **/*.js, matching all js files in all directories.
nodir - Do not match directories, only files. (Note: to match only directories, simply put a / at the end of the pattern.)
ignore - Add a pattern or an array of glob patterns to exclude matches. Note: ignore patterns are always in dot:true mode, regardless of any other settings.
follow - Follow symlinked directories when expanding ** patterns. Note that this can result in a lot of duplicate references in the presence of cyclic links.
realpath - Set to true to call fs.realpath on all of the results. In the case of a symlink that cannot be resolved, the full absolute path to the matched entry is returned (though it will usually be a broken symlink).
absolute - Set to true to always receive absolute paths for matched files. Unlike realpath, this also affects the values returned in the match event.

While strict compliance with the existing standards is a worthwhile goal, some discrepancies exist between node-glob and other implementations, and are intentional.
The double-star character ** is supported by default, unless the noglobstar flag is set. This is supported in the manner of bsdglob and bash 4.3, where ** only has special significance if it is the only thing in a path part. That is, a/**/b will match a/x/y/b, but a/**b will not.
If an escaped pattern has no matches, and the nonull flag is set, then glob returns the pattern as-provided, rather than interpreting the character escapes. For example, glob.match([], "\\*a\\?") will return "\\*a\\?" rather than "*a?". This is akin to setting the nullglob option in bash, except that it does not resolve escaped pattern characters.
If brace expansion is not disabled, then it is performed before any other interpretation of the glob pattern. Thus, a pattern like +(a|{b),c)}, which would not be valid in bash or zsh, is expanded first into the set of +(a|b) and +(a|c), and those patterns are checked for validity. Since those two are valid, matching proceeds.
Previously, this module let you mark a pattern as a “comment” if it started with a # character, or a “negated” pattern if it started with a ! character.
These options were deprecated in version 5, and removed in version 6.
To specify things that should not match, use the ignore option.
Please only use forward-slashes in glob expressions.
Though windows uses either / or \ as its path separator, only / characters are used by this glob implementation. You must use forward-slashes only in glob expressions. Back-slashes will always be interpreted as escape characters, not path separators.
Results from absolute patterns such as /foo/* are mounted onto the root setting using path.join. On windows, this will by default result in /foo/* matching C:\foo\bar.txt.
Glob searching, by its very nature, is susceptible to race conditions, since it relies on directory walking and such.
As a result, it is possible that a file that exists when glob looks for it may have been deleted or modified by the time it returns the result.
As part of its internal implementation, this program caches all stat and readdir calls that it makes, in order to cut down on system overhead. However, this also makes it even more susceptible to races, especially if the cache or statCache objects are reused between glob calls.
Users are thus advised not to use a glob result as a guarantee of filesystem state in the face of rapid changes. For the vast majority of operations, this is never a problem.
Glob’s logo was created by Tanya Brassie. Logo files can be found here.
Any change to behavior (including bugfixes) must come with a test.
Patches that fail tests or reduce performance will be rejected.
# to run tests
npm test
# to re-generate test fixtures
npm run test-regen
# to benchmark against bash/zsh
npm run bench
# to profile javascript
npm run prof

Optionator is a JavaScript/Node.js option parsing and help generation library used by eslint, Grasp, LiveScript, esmangle, escodegen, and many more.
For an online demo, check out the Grasp online demo.
About · Usage · Settings Format · Argument Format
The problem with other option parsers, such as yargs or minimist, is that they just accept all input, valid or not. With Optionator, if you mistype an option, it will give you an error (with a suggestion for what you meant). If you give the wrong type of argument for an option, it will give you an error rather than supplying the wrong input to your application.
$ cmd --halp
Invalid option '--halp' - perhaps you meant '--help'?

$ cmd --count str
Invalid value for option 'count' - expected type Int, received value: str.
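The "perhaps you meant" behavior can be sketched with a classic edit-distance comparison. This is an illustrative toy, not Optionator's actual implementation (which relies on a Levenshtein-distance dependency):

```javascript
// Levenshtein distance via standard dynamic programming:
function editDistance(a, b) {
  const d = Array.from({ length: a.length + 1 }, (_, i) =>
    Array.from({ length: b.length + 1 }, (_, j) => (i === 0 ? j : j === 0 ? i : 0)));
  for (let i = 1; i <= a.length; i++) {
    for (let j = 1; j <= b.length; j++) {
      d[i][j] = Math.min(
        d[i - 1][j] + 1,                                  // deletion
        d[i][j - 1] + 1,                                  // insertion
        d[i - 1][j - 1] + (a[i - 1] === b[j - 1] ? 0 : 1) // substitution
      );
    }
  }
  return d[a.length][b.length];
}
// Suggest the known option name closest to the (mistyped) input:
function suggest(input, known) {
  return known.reduce((best, k) =>
    editDistance(input, k) < editDistance(input, best) ? k : best);
}
console.log(suggest('halp', ['help', 'count', 'version']));
// help
```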
Other helpful features include reformatting the help text based on the size of the console, so that it fits even if the console is narrow, and accepting not just an array (eg. process.argv), but a string or object as well, making things like testing much easier.
Optionator uses type-check and levn behind the scenes to cast and verify input according to the specified types.
npm install optionator
For updates on Optionator, follow me on twitter.
Optionator is a Node.js module, but can be used in the browser as well if packed with webpack/browserify.
require('optionator'); returns a function. It has one property, VERSION, the current version of the library as a string. This function is called with an object specifying your options and other information; see the settings format section. This in turn returns an object with four properties, parse, parseArgv, generateHelp, and generateHelpForOption, which are all functions.
var optionator = require('optionator')({
prepend: 'Usage: cmd [options]',
append: 'Version 1.0.0',
options: [{
option: 'help',
alias: 'h',
type: 'Boolean',
description: 'displays help'
}, {
option: 'count',
alias: 'c',
type: 'Int',
description: 'number of things',
example: 'cmd --count 2'
}]
});
var options = optionator.parseArgv(process.argv);
if (options.help) {
console.log(optionator.generateHelp());
}
...
parse processes the input according to your settings, and returns an object with the results.
Arguments: [String] | Object | String - the input you wish to parse; {slice: Int} - all options optional.
slice specifies how much to slice away from the beginning if the input is an array or string - by default 0 for a string, 2 for an array (works with process.argv).
Returns: Object - the parsed options. Each key is a camelCase version of the option name (specified in dash-case), and each value is the processed value for that option. Positional values are in an array under the _ key.
parse(['node', 't.js', '--count', '2', 'positional']); // {count: 2, _: ['positional']}
parse('--count 2 positional'); // {count: 2, _: ['positional']}
parse({count: 2, _:['positional']}); // {count: 2, _: ['positional']}
parseArgv works exactly like parse, but only accepts array input, and it slices off the first two elements.
Arguments: [String] - the input you wish to parse.
Returns: see the "returns" section of parse.
generateHelp produces help text based on your settings.
Arguments: {showHidden: Boolean, interpolate: Object} - all options optional.
showHidden specifies whether to show options with hidden: true specified; by default it is false.
interpolate specifies data to be interpolated into the prepend and append text, using the {{key}} format - eg. generateHelp({interpolate:{version: '0.4.2'}}) will change the append text Version {{version}} to Version 0.4.2.
Returns: String - the generated help text.
generateHelp(); /*
"Usage: cmd [options] positional
-h, --help displays help
-c, --count Int number of things
Version 1.0.0
"*/
generateHelpForOption produces expanded help text for the option specified with optionName. If an example was specified for the option, it will be displayed, and if a longDescription was specified, it will be displayed instead of the description.
Arguments: String - the name of the option to display.
Returns: String - the generated help text for the option.
generateHelpForOption('count'); /*
"-c, --count Int
description: number of things
example: cmd --count 2
"*/When your require('optionator'), you get a function that takes in a settings object. This object has the type:
"-c, --count Int
description: number of things
example: cmd --count 2
"*/
When you require('optionator'), you get a function that takes in a settings object. This object has the type:
{
  prepend: String,
  append: String,
  options: [{heading: String} | {
    option: String,
    alias: String | [String],
    type: String,
    enum: [String],
    default: String,
    restPositional: Boolean,
    required: Boolean,
    overrideRequired: Boolean,
    dependsOn: String | [String],
    concatRepeatedArrays: Boolean | (Boolean, Object),
    mergeRepeatedObjects: Boolean,
    description: String,
    longDescription: String,
    example: String | [String]
  }],
  helpStyle: {
    aliasSeparator: String,
    typeSeparator: String,
    descriptionSeparator: String,
    initialIndent: Int,
    secondaryIndent: Int,
    maxPadFactor: Number
  },
  mutuallyExclusive: [[String | [String]]],
  concatRepeatedArrays: Boolean | (Boolean, Object), // deprecated, set in defaults object
  mergeRepeatedObjects: Boolean, // deprecated, set in defaults object
  positionalAnywhere: Boolean,
  typeAliases: Object,
  defaults: Object
}
All of the properties are optional (the Maybe has been excluded for brevity's sake), except for having either heading: String or option: String in each object in the options array.
- prepend is an optional string to be placed before the options in the help text
- append is an optional string to be placed after the options in the help text
- options is a required array specifying your options and headings; the options and headings will be displayed in the order specified
- helpStyle is an optional object which enables you to change the default appearance of some aspects of the help text
- mutuallyExclusive is an optional array of arrays of either strings or arrays of strings. The top level array is a list of rules; each rule is a list of elements - each element can be either a string (the name of an option), or a list of strings (a group of option names) - there will be an error if more than one element is present
- concatRepeatedArrays see description under the "Option Properties" heading - use at the top level is deprecated; if you want to set this for all options, use the defaults property
- mergeRepeatedObjects see description under the "Option Properties" heading - use at the top level is deprecated; if you want to set this for all options, use the defaults property
- positionalAnywhere is an optional boolean (defaults to true) - when true it allows positional arguments anywhere; when false, all arguments after the first positional one are taken to be positional as well, even if they look like a flag. For example, with positionalAnywhere: false, the arguments --flag --boom 12 --crack would have two positional arguments: 12 and --crack
- typeAliases is an optional object; it allows you to set aliases for types, eg. {Path: 'String'} would allow you to use the type Path as an alias for the type String
- defaults is an optional object following the option properties format, which specifies default values for all options. A default will be overridden if manually set. For example, you can do defaults: { type: "String" } to set the default type of all options to String, and then override that default in an individual option by setting the type property

Option Properties:

- heading a required string, the name of the heading
- option the required name of the option - use dash-case, without the leading dashes
- alias is an optional string or array of strings which specify any aliases for the option
- type is a required string in the type check format; this will be used to cast the inputted value and validate it
- enum is an optional array of strings; each string will be parsed by levn - the argument value must be one of the resulting values - each potential value must validate against the specified type
- default is an optional string, which will be parsed by levn and used as the default value if none is set - the value must validate against the specified type
- restPositional is an optional boolean - if set to true, everything after the option will be taken to be a positional argument, even if it looks like a named argument
- required is an optional boolean - if set to true, the option parsing will fail if the option is not defined
- overrideRequired is an optional boolean - if set to true and the option is used, and there is another option which is required but not set, it will override the need for the required option and there will be no error - this is useful if you have required options and want to use --help or --version flags
- concatRepeatedArrays is an optional boolean or tuple with boolean and options object (defaults to false) - when set to true and an option contains an array value and is repeated, the subsequent values for the flag will be appended rather than overwriting the original value - eg. option g of type [String]: -g a -g b -g c,d will result in ['a','b','c','d']. You can supply an options object by giving the following value: [true, options]. The one currently supported option is oneValuePerFlag, which only allows one array value per flag. This is useful if your potential values contain a comma.
- mergeRepeatedObjects is an optional boolean (defaults to false) - when set to true and an option contains an object value and is repeated, the subsequent values for the flag will be merged rather than overwriting the original value - eg. option g of type Object: -g a:1 -g b:2 -g c:3,d:4 will result in {a: 1, b: 2, c: 3, d: 4}
- dependsOn is an optional string or array of strings - if simply a string (the name of another option), it will make sure that that other option is set; if an array of strings, depending on whether 'and' or 'or' is first, it will either check whether all (['and', 'option-a', 'option-b']), or at least one (['or', 'option-a', 'option-b']) of the other options are set
- description is an optional string, which will be displayed next to the option in the help text
- longDescription is an optional string; it will be displayed instead of the description when generateHelpForOption is used
- example is an optional string or array of strings with example(s) for the option - these will be displayed when generateHelpForOption is used
- aliasSeparator is an optional string, separates multiple names from each other - default: ', '
- typeSeparator is an optional string, separates the type from the names - default: ' '
- descriptionSeparator is an optional string, separates the description from the padded name and type - default: ' '
- initialIndent is an optional int - the amount of indent for options - default: 2
- secondaryIndent is an optional int - the amount of indent if wrapped fully (in addition to the initial indent) - default: 4
- maxPadFactor is an optional number - affects the default level of padding for the names/type; it is multiplied by the average of the length of the names/type - default: 1.5

At the highest level there are two types of arguments: named, and positional.
Named arguments of any length are prefixed with -- (eg. --go), and those of one character may be prefixed with either -- or - (eg. -g).
There are two types of named arguments: boolean flags (eg. --problemo, -p) which take no value and result in a true if they are present, the falsey undefined if they are not present, or false if present and explicitly prefixed with no (eg. --no-problemo). Named arguments with values (eg. --tseries 800, -t 800) are the other type. If the option has a type Boolean it will automatically be made into a boolean flag. Any other type results in a named argument that takes a value.
For more information about how to properly set types to get the value you want, take a look at the type check and levn pages.
You can group single character arguments that use a single -, however all except the last must be boolean flags (which take no value). The last may be a boolean flag, or an argument which takes a value - eg. -ba 2 is equivalent to -b -a 2.
Positional arguments are all those values which do not fall under the above - they can be anywhere, not just at the end. For example, in cmd -b one -a 2 two where b is a boolean flag, and a has the type Number, there are two positional arguments, one and two.
Everything after an -- is positional, even if it looks like a named argument.
You may optionally use = to separate option names from values, for example: --count=2.
If you specify the option NUM, then any argument using a single - followed by a number will be valid and will set the value of NUM. Eg. -2 will be parsed into NUM: 2.
If duplicate named arguments are present, the last one will be taken.
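A few of the rules above (boolean flags, the no- prefix, value-taking flags, last duplicate wins) can be sketched in a toy parser. The toyParse name is hypothetical and this is not optionator's implementation: it ignores types, aliases, grouping, = separators, and NUM.

```javascript
// Toy sketch of a few named-argument rules (NOT optionator itself):
// --flag -> true, --no-flag -> false, a flag followed by a token that
// doesn't start with '-' takes that token as its value, and when an
// option is repeated the last occurrence wins.
function toyParse(argv) {
  const out = { _: [] };
  for (let i = 0; i < argv.length; i++) {
    const a = argv[i];
    if (a.startsWith('--no-')) {
      out[a.slice(5)] = false;
    } else if (a.startsWith('--')) {
      const name = a.slice(2);
      const next = argv[i + 1];
      if (next !== undefined && !next.startsWith('-')) {
        out[name] = next;
        i++; // consume the value
      } else {
        out[name] = true;
      }
    } else {
      out._.push(a); // positional
    }
  }
  return out;
}

// toyParse(['--count', '2', '--no-color', 'pos', '--count', '3'])
// -> { _: ['pos'], count: '3', color: false }
```

Optionator layers type casting, validation, and error suggestions on top of this kind of raw parsing, which is exactly what the toy version leaves out.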
optionator is written in LiveScript - a language that compiles to JavaScript. It uses levn to cast arguments to their specified type, and uses type-check to validate values. It also uses the prelude.ls library.
Install
npm install postgres
Use
const postgres = require('postgres')
const sql = postgres({ ...options }) // will default to the same as psql
await sql`
select name, age from users
`
// > [{ name: 'Murray', age: 68 }, { name: 'Walter', age: 78 }]
postgres([url], [options])
You can use either a postgres:// url connection string or the options to define your database connection properties. Options in the object will override any present in the url.
const sql = postgres('postgres://username:password@host:port/database', {
host : '', // Postgres ip address or domain name
port : 5432, // Postgres server port
path : '', // unix socket path (usually '/tmp')
database : '', // Name of database to connect to
username : '', // Username of database user
password : '', // Password of database user
ssl : false, // True, or options for tls.connect
max : 10, // Max number of connections
timeout : 0, // Idle connection timeout in seconds
types : [], // Array of custom types, see more below
onnotice : fn, // Defaults to console.log
onparameter : fn, // (key, value) when a server param changes
debug : fn, // Is called with (connection, query, parameters)
transform : {
column : fn, // Transforms incoming column names
value : fn, // Transforms incoming row values
row : fn // Transforms entire rows
},
connection : {
application_name : 'postgres.js', // Default application_name
... // Other connection parameters
}
})
More info for the ssl option can be found in the Node.js docs for tls connect options.
sql` ` -> Promise
A query will always return a Promise which resolves to a results array [...]{ rows, command }. Destructuring is great to immediately access the first element.
const [new_user] = await sql`
insert into users (
name, age
) values (
'Murray', 68
)
returning *
`
// new_user = { user_id: 1, name: 'Murray', age: 68 }
Parameters are automatically inferred and handled by Postgres so that SQL injection isn't possible. No special handling is necessary; simply use JS tagged template literals as usual.
let search = 'Mur'
const users = await sql`
select
name,
age
from users
where
name like ${ search + '%' }
`
// users = [{ name: 'Murray', age: 68 }]
sql` `.stream(fn) -> Promise
If you want to handle rows returned by a query one by one, you can use .stream, which returns a promise that resolves once there are no more rows.
await sql`
select created_at, name from events
`.stream(row => {
// row = { created_at: '2019-11-22T14:22:00Z', name: 'connected' }
})
// No more rows
When you call listen, a dedicated connection will automatically be made to ensure that you receive notifications in real time. This connection will be used for any further calls to listen. listen returns a promise which resolves once the LISTEN query to Postgres completes, or if there is already a listener active.
await sql.listen('news', payload => {
const json = JSON.parse(payload)
console.log(json.this) // logs 'is'
})
Notify can be done as usual in sql, or by using the sql.notify method.
sql``
Tagged template functions are not just ordinary template literal strings. They allow the function to handle any parameters within before interpolation. This means they can be used to enforce a safe way of writing queries, which is what Postgres.js does. Any generic value will be serialized according to an inferred type, replaced by PostgreSQL protocol placeholders $1, $2, ... and then sent to the database as a parameter, letting the database handle any need for escaping / casting.
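To see the mechanism, here is a minimal sketch of a tagged template that rewrites interpolations as numbered placeholders. The toQuery name is illustrative only; Postgres.js itself also does type inference, serialization, and the wire protocol.

```javascript
// Minimal sketch of the placeholder mechanism (not Postgres.js itself).
// A tag function receives the literal string parts and the interpolated
// values separately, so values never get pasted into the SQL text.
function toQuery(strings, ...values) {
  const text = strings
    .map((s, i) => s + (i < values.length ? '$' + (i + 1) : ''))
    .join('');
  return { text, parameters: values };
}

const name = 'Murray';
const q = toQuery`select age from users where name = ${name}`;
// q.text       -> 'select age from users where name = $1'
// q.parameters -> ['Murray']
```

Because the value travels as a parameter rather than as SQL text, there is nothing for an attacker to inject into.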
This also means you cannot write dynamic queries or concatenate queries together by simple string manipulation. To enable dynamic queries in a safe way, the sql function doubles as a regular function which escapes any value properly. It also includes overloads for common cases of inserting, selecting, updating and querying.
sql() inside tagged template
Postgres.js has a safe, ergonomic way to aid you in writing queries. This makes it easier to write dynamic inserts, selects, updates and where queries.
const user = {
name: 'Murray',
age: 68
}
sql`
insert into users ${
sql(user, 'name', 'age')
}
`
// Is translated into this query:
insert into users (name, age) values ($1, $2)
You can leave out the column names and simply do sql(user) if you want to get all fields from the object as columns, but be careful not to allow users to supply columns you don't want.
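The column/value expansion can be pictured with a small sketch. The insertFragment helper is hypothetical and purely illustrative; the real sql(user, 'name', 'age') helper also escapes identifiers and composes with the surrounding tagged template.

```javascript
// Illustrative sketch of how an object plus a column list could expand
// into column names and numbered placeholders for an insert.
function insertFragment(row, columns) {
  const cols = columns.length ? columns : Object.keys(row);
  const placeholders = cols.map((_, i) => '$' + (i + 1)).join(', ');
  return {
    text: '(' + cols.join(', ') + ') values (' + placeholders + ')',
    parameters: cols.map((c) => row[c])
  };
}

const frag = insertFragment({ name: 'Murray', age: 68, garbage: 'ignore' }, ['name', 'age']);
// frag.text       -> '(name, age) values ($1, $2)'
// frag.parameters -> ['Murray', 68]
```

Note how the unlisted garbage key is simply dropped, which is why explicitly naming columns protects you from untrusted input.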
If you need to insert multiple rows at the same time it’s also much faster to do it with a single insert. Simply pass an array of objects to sql().
const users = [{
name: 'Murray',
age: 68,
garbage: 'ignore'
}, {
name: 'Walter',
age: 78
}]
sql`
insert into users ${
sql(users, 'name', 'age')
}
`
This is also useful for update queries:
const user = {
id: 1,
name: 'Murray'
}
sql`
update users set ${
sql(user, 'name')
} where
id = ${ user.id }
`
// Is translated into this query:
update users set name = $1 where id = $2
const columns = ['name', 'age']
sql`
select ${
sql(columns)
} from users
`
// Is translated into this query:
select name, age from users
sql.array(Array)
PostgreSQL has a native array type which is similar to js arrays, but only allows the same type and shape for nested items. This method automatically infers the item type and serializes js arrays into PostgreSQL arrays.
const types = sql`
insert into types (
integers,
strings,
dates,
buffers,
multi
) values (
${ sql.array([1,2,3,4,5]) },
${ sql.array(['Hello', 'Postgres']) },
${ sql.array([new Date(), new Date(), new Date()]) },
${ sql.array([Buffer.from('Hello'), Buffer.from('Postgres')]) },
${ sql.array([[[1,2],[3,4]],[[5,6],[7,8]]]) }
)
`
sql.json(object)
const body = { hello: 'postgres' }
const [{ json }] = await sql`
insert into json (
body
) values (
${ sql.json(body) }
)
returning body
`
// json = { hello: 'postgres' }
sql.file(path, [args], [options]) -> Promise
Use an .sql file for a query. The contents will be cached in memory so that the file is only read once.
sql.file(path.join(__dirname, 'query.sql'), [], {
cache: true // Default true - disable for single shot queries or memory reasons
})
sql.begin(fn) -> Promise
const [user, account] = await sql.begin(async sql => {
const [user] = await sql`
insert into users (
name
) values (
'Alice'
)
`
const [account] = await sql`
insert into accounts (
user_id
) values (
${ user.user_id }
)
`
return [user, account]
})
sql.savepoint([name], fn) -> Promise
sql.begin(async sql => {
const [user] = await sql`
insert into users (
name
) values (
'Alice'
)
`
const [account] = (await sql.savepoint(sql =>
sql`
insert into accounts (
user_id
) values (
${ user.user_id }
)
`
).catch(err => {
// Account could not be created. ROLLBACK SAVEPOINT is called because we caught the rejection.
})) || []
return [user, account]
})
.then(([user, account]) => {
})
.catch(() => {
// not so good - ROLLBACK was called
})
Do note that you can often achieve the same result using WITH queries (Common Table Expressions) instead of using transactions.
You can add ergonomic support for custom types, or simply pass an object with a { type, value } signature that contains the Postgres oid for the type and the correctly serialized value.
Adding Query helpers is the recommended approach which can be done like this:
const sql = postgres({
types: {
rect: {
to : 1337,
from : [1337],
serialize : ({ x, y, width, height }) => [x, y, width, height],
parse : ([x, y, width, height]) => ({ x, y, width, height })
}
}
})
const [custom] = await sql`
insert into rectangles (
name,
rect
) values (
'wat',
${ sql.types.rect({ x: 13, y: 37, width: 42, height: 80 }) }
)
returning *
`
// custom = { name: 'wat', rect: { x: 13, y: 37, width: 42, height: 80 } }
To ensure proper teardown and cleanup on server restarts, use sql.end({ timeout: null }) before process.exit().
Calling sql.end() will reject new queries and return a Promise which resolves when all queries are finished and the underlying connections are closed. If a timeout is provided any pending queries will be rejected once the timeout is reached and the connections will be destroyed.
import prexit from 'prexit'
prexit(async () => {
await sql.end({ timeout: 5 })
await new Promise(r => server.close(r))
})
Connections are created lazily once a query is made. This means that simply doing const sql = postgres(...) won't have any effect other than instantiating a new sql instance.
No connection will be made until a query is made.
This means that we get a much simpler story for error handling and reconnections. Queries will be sent over the wire immediately on the next available connection in the pool. Connections are automatically taken out of the pool if you start a transaction using sql.begin(), and automatically returned to the pool once your transaction is done.
Any query which was already sent over the wire will be rejected if the connection is lost. It’ll automatically defer to the error handling you have for that query, and since connections are lazy it’ll automatically try to reconnect the next time a query is made. The benefit of this is no weird generic “onerror” handler that tries to get things back to normal, and also simpler application code since you don’t have to handle errors out of context.
There are no guarantees about queries executing in order unless using a transaction with sql.begin() or setting max: 1. Of course doing a series of queries, one awaiting the other will work as expected, but that’s just due to the nature of js async/promise handling, so it’s not necessary for this library to be concerned with ordering.
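The "one awaiting the other" ordering mentioned above is plain promise semantics, which can be shown without any database at all. The sequential and slowThenFast names below are illustrative, not part of the library.

```javascript
// Plain-promise illustration (no database involved): sequential awaits
// impose ordering at the application level, even when the first
// operation takes much longer than the second.
async function sequential(run) {
  const log = [];
  await run('a').then(() => log.push('a'));
  await run('b').then(() => log.push('b'));
  return log; // always ['a', 'b'], regardless of how long each step takes
}

// 'a' is deliberately slower than 'b'.
const slowThenFast = (name) =>
  new Promise((resolve) => setTimeout(resolve, name === 'a' ? 20 : 1));
```

Firing both promises without awaiting in between (eg. with Promise.all) would give no such ordering guarantee, which is why the library only promises ordering inside sql.begin() or with max: 1.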
sql.unsafe - Advanced unsafe use cases
sql.unsafe(query, [args], [options]) -> Promise
If you know what you're doing, you can use unsafe to pass any string you'd like to postgres. Please note that this can lead to SQL injection if you're not careful.
Errors are all thrown to related queries and never globally. Errors coming from PostgreSQL itself are always in the native Postgres format, and the same goes for any Node.js errors eg. coming from the underlying connection.
There are also the following errors specifically for this library.
X (X) is not supported
Whenever a message is received from Postgres which is not supported by this library. Feel free to file an issue if you think something is missing.
Max number of parameters (65534) exceeded
The postgres protocol doesn’t allow more than 65534 (16bit) parameters. If you run into this issue there are various workarounds such as using sql([...]) to escape values instead of passing them as parameters.
SASL signature mismatch
When using SASL authentication the server responds with a signature at the end of the authentication flow which needs to match the one on the client. This is to avoid man in the middle attacks. If you receive this error the connection was cancelled because the server did not reply with the expected signature.
Query not called as a tagged template literal
Making queries has to be done using the sql function as a tagged template. This is to ensure parameters are serialized and passed to Postgres as query parameters with correct types and to avoid SQL injection.
Auth type X not implemented
Postgres supports many different authentication types. This one is not supported.
write CONNECTION_CLOSED host:port
This error is thrown if the connection was closed without an error. This should not happen during normal operation, so please create an issue if this was unexpected.
write CONNECTION_ENDED host:port
This error is thrown if the user has called sql.end() and performed a query afterwards.
write CONNECTION_DESTROYED host:port
This error is thrown for any queries that were pending when the timeout to sql.end({ timeout: X }) was reached.
A really big thank you to [@JAForbes](https://twitter.com/jmsfbs) who introduced me to Postgres and still holds my hand navigating all the great opportunities we have.
Thanks to [@ACXgit](https://twitter.com/andreacoiutti) for initial tests and dogfooding.
Also thanks to Ryan Dahl for letting me have the postgres npm package name.
fs.readdir()
:warning: This is a fork of the original readdir-enhanced package, with some fixes.
readdir-enhanced is a backward-compatible drop-in replacement for fs.readdir() and fs.readdirSync() with tons of extra features (filtering, recursion, absolute paths, stats, and more) as well as additional APIs for Promises, Streams, and EventEmitters.
readdir-enhanced has multiple APIs, so you can pick whichever one you prefer. There are three main APIs:
Synchronous API
aliases: readdir.sync, readdir.readdirSync
Blocks the thread until all directory contents are read, and then returns all the results.
Asynchronous API
aliases: readdir.async, readdir.readdirAsync
Reads the directory contents asynchronously and returns the results via a callback or a Promise.
Streaming API
aliases: readdir.stream, readdir.readdirStream
The streaming API reads the starting directory asynchronously and returns the results in real time as they are read. The results can be piped to other Node.js streams, or you can listen for specific events via the EventEmitter interface (see the example below).
var readdir = require('readdir-enhanced');
var through2 = require('through2');
// Synchronous API
var files = readdir.sync('my/directory');
// Callback API
readdir.async('my/directory', function(err, files) { ... });
// Promises API
readdir.async('my/directory')
.then(function(files) { ... })
.catch(function(err) { ... });
// EventEmitter API
readdir.stream('my/directory')
.on('data', function(path) { ... })
.on('file', function(path) { ... })
.on('directory', function(path) { ... })
.on('symlink', function(path) { ... })
.on('error', function(err) { ... });
// Streaming API
var stream = readdir.stream('my/directory')
.pipe(through2.obj(function(data, enc, next) {
console.log(data);
this.push(data);
next();
}));
Enhanced Features
readdir-enhanced adds several features to the built-in fs.readdir() function. All of the enhanced features are opt-in, which makes readdir-enhanced fully backward compatible by default. You can enable any of the features by passing in an options argument as the second parameter.
### Recursion
By default, readdir-enhanced will only return the top-level contents of the starting directory. But you can set the deep option to recursively traverse the subdirectories and return their contents as well.
The deep option can be set to true to traverse the entire directory structure.
var readdir = require('readdir-enhanced');
readdir('my/directory', {deep: true}, function(err, files) {
console.log(files);
// => subdir1
// => subdir1/file.txt
// => subdir1/subdir2
// => subdir1/subdir2/file.txt
// => subdir1/subdir2/subdir3
// => subdir1/subdir2/subdir3/file.txt
});
The deep option can be set to a number to only traverse that many levels deep. For example, calling readdir('my/directory', {deep: 2}) will return subdir1/file.txt and subdir1/subdir2/file.txt, but it won't return subdir1/subdir2/subdir3/file.txt.
var readdir = require('readdir-enhanced');
readdir('my/directory', {deep: 2}, function(err, files) {
console.log(files);
// => subdir1
// => subdir1/file.txt
// => subdir1/subdir2
// => subdir1/subdir2/file.txt
// => subdir1/subdir2/subdir3
});
For simple use-cases, you can use a regular expression or a glob pattern to crawl only the directories whose path matches the pattern. The path is relative to the starting directory by default, but you can customize this via options.basePath.
NOTE: Glob patterns always use forward slashes, even on Windows. This does not apply to regular expressions, though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].
var readdir = require('readdir-enhanced');
// Only crawl the "lib" and "bin" subdirectories
// (notice that the "node_modules" subdirectory does NOT get crawled)
readdir('my/directory', {deep: /lib|bin/}, function(err, files) {
console.log(files);
// => bin
// => bin/cli.js
// => lib
// => lib/index.js
// => node_modules
// => package.json
});
For more advanced recursion, you can set the deep option to a function that accepts an fs.Stats object and returns a truthy value if the directory should be crawled.
NOTE: The fs.Stats object that's passed to the function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
var readdir = require('readdir-enhanced');
// Crawl all subdirectories, except "node_modules"
function ignoreNodeModules (stats) {
return stats.path.indexOf('node_modules') === -1;
}
readdir('my/directory', {deep: ignoreNodeModules}, function(err, files) {
console.log(files);
// => bin
// => bin/cli.js
// => lib
// => lib/index.js
// => node_modules
// => package.json
});
### Filtering
The filter option lets you limit the results based on any criteria you want.
For simple use-cases, you can use a regular expression or a glob pattern to filter items by their path. The path is relative to the starting directory by default, but you can customize this via options.basePath.
NOTE: Glob patterns always use forward slashes, even on Windows. This does not apply to regular expressions, though. Regular expressions should use the appropriate path separator for the environment. Or, you can match both types of separators using [\\/].
var readdir = require('readdir-enhanced');
// Find all .txt files
readdir('my/directory', {filter: '*.txt'});
// Find all package.json files
readdir('my/directory', {filter: '**/package.json', deep: true});
// Find everything with at least one number in the name
readdir('my/directory', {filter: /\d+/});
For more advanced filtering, you can specify a filter function that accepts an fs.Stats object and returns a truthy value if the item should be included in the results.
NOTE: The fs.Stats object that's passed to the filter function has additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
var readdir = require('readdir-enhanced');
// Only return file names containing an underscore
function myFilter(stats) {
return stats.isFile() && stats.path.indexOf('_') >= 0;
}
readdir('my/directory', {filter: myFilter}, function(err, files) {
console.log(files);
// => __myFile.txt
// => my_other_file.txt
// => img_1.jpg
// => node_modules
});
### Base Path
By default, all readdir-enhanced functions return paths that are relative to the starting directory. But you can use the basePath option to customize this. The basePath will be prepended to all of the returned paths. One common use-case for this is to set basePath to the absolute path of the starting directory, so that all of the returned paths will be absolute.
var readdir = require('readdir-enhanced');
var path = require('path');
// Get absolute paths
var absPath = path.resolve('my/dir');
readdir('my/directory', {basePath: absPath}, function(err, files) {
console.log(files);
// => /absolute/path/to/my/directory/file1.txt
// => /absolute/path/to/my/directory/file2.txt
// => /absolute/path/to/my/directory/subdir
});
// Get paths relative to the working directory
readdir('my/directory', {basePath: 'my/directory'}, function(err, files) {
console.log(files);
// => my/directory/file1.txt
// => my/directory/file2.txt
// => my/directory/subdir
});
### Path Separator
By default, readdir-enhanced uses the correct path separator for your OS (\ on Windows, / on Linux & MacOS). But you can set the sep option to any separator character(s) that you want to use instead. This is usually used to ensure consistent path separators across different OSes.
var readdir = require('readdir-enhanced');
// Always use Windows path separators
readdir('my/directory', {sep: '\\', deep: true}, function(err, files) {
console.log(files);
// => subdir1
// => subdir1\file.txt
// => subdir1\subdir2
// => subdir1\subdir2\file.txt
// => subdir1\subdir2\subdir3
// => subdir1\subdir2\subdir3\file.txt
});
### Custom FS methods
By default, readdir-enhanced uses the default Node.js FileSystem module for methods like fs.stat, fs.readdir and fs.lstat. But in some situations you may want to use your own FS methods (FTP, SSH, remote drives, etc.). You can provide your own implementation of FS methods by setting options.fs, or specific methods, such as options.fs.stat.
var readdir = require('readdir-enhanced');
function myCustomReaddirMethod(dir, callback) {
callback(null, ['__myFile.txt']);
}
var options = {
fs: {
readdir: myCustomReaddirMethod
}
};
readdir('my/directory', options, function(err, files) {
console.log(files);
// => __myFile.txt
});

### Get fs.Stats Objects Instead of Strings

All of the readdir-enhanced functions listed above return an array of strings (paths). But in some situations, the path isn’t enough information. So readdir-enhanced provides alternative versions of each function, which return an array of fs.Stats objects instead of strings. The fs.Stats object contains all sorts of useful information, such as the size, the creation date/time, and helper methods such as isFile(), isDirectory(), isSymbolicLink(), etc.
NOTE: The fs.Stats objects that are returned also have additional path and depth properties. The path is relative to the starting directory by default, but you can customize this via options.basePath. The depth is the number of subdirectories beneath the base path (see options.deep).
To get fs.Stats objects instead of strings, just add the word “Stat” to the function name. As with the normal functions, each one is aliased (e.g. readdir.async.stat is the same as readdir.readdirAsyncStat), so you can use whichever naming style you prefer.
var readdir = require('readdir-enhanced');
// Synchronous API
var stats = readdir.sync.stat('my/directory');
var stats = readdir.readdirSyncStat('my/directory');
// Async API
readdir.async.stat('my/directory', function(err, stats) { ... });
readdir.readdirAsyncStat('my/directory', function(err, stats) { ... });
// Streaming API
readdir.stream.stat('my/directory')
.on('data', function(stat) { ... })
.on('file', function(stat) { ... })
.on('directory', function(stat) { ... })
.on('symlink', function(stat) { ... });
readdir.readdirStreamStat('my/directory')
.on('data', function(stat) { ... })
.on('file', function(stat) { ... })
.on('directory', function(stat) { ... })
.on('symlink', function(stat) { ... });

### Backward Compatible

readdir-enhanced is fully backward-compatible with Node.js’ built-in fs.readdir() and fs.readdirSync() functions, so you can use it as a drop-in replacement in existing projects without affecting existing functionality, while still being able to use the enhanced features as needed.
var readdir = require('readdir-enhanced');
var readdirSync = readdir.sync;
// Use it just like Node's built-in fs.readdir function
readdir('my/directory', function(err, files) { ... });
// Use it just like Node's built-in fs.readdirSync function
var files = readdirSync('my/directory');

### Contributing

I welcome any contributions, enhancements, and bug fixes. File an issue on GitHub and submit a pull request.
To build the project locally on your computer:
Clone this repo
git clone https://github.com/bigstickcarpet/readdir-enhanced.git
Install dependencies
npm install
Run the tests
npm test
As a node module:
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'

As a command-line utility:
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
A “version” is described by the v2.0.0 specification found at https://semver.org/.
A leading "=" or "v" character is stripped off and ignored.
A version range is a set of comparators which specify versions that satisfy the range.
A comparator is composed of an operator and a version. The set of primitive operators is:
< Less than
<= Less than or equal to
> Greater than
>= Greater than or equal to
= Equal. If no operator is specified, then equality is assumed, so this operator is optional but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.
Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.
A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.
For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.
The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.
For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.
Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.
The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:
command-line example:
Which then can be used to increment further:
Advanced range syntax desugars to primitive comparators in deterministic ways.
Advanced ranges may be combined in the same way as primitive comparators using white space or ||.
Hyphen Ranges: X.Y.Z - A.B.C

Specifies an inclusive set.

1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

1.2.3 - 2.3 := >=1.2.3 <2.4.0
1.2.3 - 2 := >=1.2.3 <3.0.0

X-Ranges: 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

* := >=0.0.0 (Any version satisfies)
1.x := >=1.0.0 <2.0.0 (Matching major version)
1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

"" (empty string) := * := >=0.0.0
1 := 1.x.x := >=1.0.0 <2.0.0
1.2 := 1.2.x := >=1.2.0 <1.3.0

Tilde Ranges: ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.

~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges: ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero digit in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.
Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.
Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.
^1.2.3 := >=1.2.3 <2.0.0
^0.2.3 := >=0.2.3 <0.3.0
^0.0.3 := >=0.0.3 <0.0.4
^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.

When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.

^1.2.x := >=1.2.0 <2.0.0
^0.0.x := >=0.0.0 <0.1.0
^0.0 := >=0.0.0 <0.1.0

Missing minor and patch values will desugar to zero, but also allow flexibility within those values, even if the major version is zero.

^1.x := >=1.0.0 <2.0.0
^0.x := >=0.0.0 <1.0.0

Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:
range-set ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range ::= hyphen | simple ( ' ' simple ) * | ''
hyphen ::= partial ' - ' partial
simple ::= primitive | partial | tilde | caret
primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr ::= 'x' | 'X' | '*' | nr
nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde ::= '~' partial
caret ::= '^' partial
qualifier ::= ( '-' pre )? ( '+' build )?
pre ::= parts
build ::= parts
parts ::= part ( '.' part ) *
part ::= nr | [-0-9A-Za-z]+
All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:
loose: Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
includePrerelease: Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.

Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.
valid(v): Return the parsed version, or null if it’s not valid.
inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid.
  - premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor and prepatch work the same way.
  - prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
major(v): Return the major version number.
minor(v): Return the minor version number.
patch(v): Return the patch version number.
intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.

Comparison:

gt(v1, v2): v1 > v2
gte(v1, v2): v1 >= v2
lt(v1, v2): v1 < v2
lte(v1, v2): v1 <= v2
eq(v1, v2): v1 == v2. This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
neq(v1, v2): v1 != v2. The opposite of eq.
cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
compare(v1, v2): Return 0 if v1 == v2, 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
diff(v1, v2): Returns the difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.

Comparators:

intersects(comparator): Return true if the comparators intersect.

Ranges:

validRange(range): Return the valid range or null if it’s not valid.
satisfies(version, range): Return true if the version satisfies the range.
maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
minVersion(range): Return the lowest version that can possibly match the given range.
gtr(version, range): Return true if version is greater than all the versions possible in the range.
ltr(version, range): Return true if version is less than all the versions possible in the range.
outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
intersects(range): Return true if any of the range’s comparators intersect.

Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.
If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.
coerce(version): Coerces a string to semver if possible.

This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher-value components are invalid (9999999999999999.4.7.4 is likely invalid).
With browserify, simply require('buffer') or use the Buffer global and you will get this module.
The goal is to provide an API that is 100% identical to node’s Buffer API. Read the official docs for the full list of properties, instance methods, and class methods that are supported.
Backed by Typed Arrays (Uint8Array/ArrayBuffer, not Object)
buf[4] notation works, even in old browsers like IE6!
Does not modify any browser prototypes or put anything on window

To use this module directly (without browserify), install it:
This module was previously called native-buffer-browserify, but please use buffer from now on.
A standalone bundle is available here, for non-browserify users.
The module’s API is identical to node’s Buffer API. Read the official docs for the full list of properties, instance methods, and class methods that are supported.
As mentioned above, require('buffer') or use the Buffer global with browserify and this module will automatically be included in your bundle. Almost any npm module will work in the browser, even if it assumes that the node Buffer API will be available.
To depend on this module explicitly (without browserify), require it like this:
To require this module explicitly, use require('buffer/') which tells the node.js module lookup algorithm (also used by browserify) to use the npm module named buffer instead of the node.js core module named buffer!
The Buffer constructor returns instances of Uint8Array that have their prototype changed to Buffer.prototype. Furthermore, Buffer is a subclass of Uint8Array, so the returned instances will have all the node Buffer methods and the Uint8Array methods. Square bracket notation works as expected – it returns a single octet.
The Uint8Array prototype remains unmodified.
buf.slice() does not modify parent buffer’s memory

If you only support modern browsers (specifically, those with typed array support), then this issue does not affect you. If you support super old browsers, then read on.
In node, the slice() method returns a new Buffer that shares underlying memory with the original Buffer. When you modify one buffer, you modify the other. Read more.
In browsers with typed array support, this Buffer implementation supports this behavior. In browsers without typed arrays, an alternate buffer implementation is used that is based on Object which has no mechanism to point separate Buffers to the same underlying slab of memory.
You can see which browser versions lack typed array support here.
This module tracks the Buffer API in the latest (unstable) version of node.js. The Buffer API is considered stable in the node stability index, so it is unlikely that there will ever be breaking changes. Nonetheless, when/if the Buffer API changes in node, this module’s API will change accordingly.
- buffer-equals - Node.js 0.12 buffer.equals() ponyfill
- buffer-reverse - A lite module for reverse-operations on buffers
- buffer-xor - A simple module for bitwise-xor on buffers
- is-buffer - Determine if an object is a Buffer without including the whole Buffer package
- typedarray-to-buffer - Convert a typed array to a Buffer without a copy

See perf tests in /perf.
BrowserBuffer is the browser buffer module (this repo). Uint8Array is included as a sanity check (since BrowserBuffer uses Uint8Array under the hood, Uint8Array will always be at least a bit faster). Finally, NodeBuffer is the node.js buffer module, which is included to compare against.
NOTE: Performance has improved since these benchmarks were taken. PRs welcome to update the README.
| Method | Operations | Accuracy | Sampled | Fastest |
|---|---|---|---|---|
| BrowserBuffer#bracket-notation | 11,457,464 ops/sec | ±0.86% | 66 | ✓ |
| Uint8Array#bracket-notation | 10,824,332 ops/sec | ±0.74% | 65 | |
| BrowserBuffer#concat | 450,532 ops/sec | ±0.76% | 68 | |
| Uint8Array#concat | 1,368,911 ops/sec | ±1.50% | 62 | ✓ |
| BrowserBuffer#copy(16000) | 903,001 ops/sec | ±0.96% | 67 | |
| Uint8Array#copy(16000) | 1,422,441 ops/sec | ±1.04% | 66 | ✓ |
| BrowserBuffer#copy(16) | 11,431,358 ops/sec | ±0.46% | 69 | |
| Uint8Array#copy(16) | 13,944,163 ops/sec | ±1.12% | 68 | ✓ |
| BrowserBuffer#new(16000) | 106,329 ops/sec | ±6.70% | 44 | |
| Uint8Array#new(16000) | 131,001 ops/sec | ±2.85% | 31 | ✓ |
| BrowserBuffer#new(16) | 1,554,491 ops/sec | ±1.60% | 65 | |
| Uint8Array#new(16) | 6,623,930 ops/sec | ±1.66% | 65 | ✓ |
| BrowserBuffer#readDoubleBE | 112,830 ops/sec | ±0.51% | 69 | ✓ |
| DataView#getFloat64 | 93,500 ops/sec | ±0.57% | 68 | |
| BrowserBuffer#readFloatBE | 146,678 ops/sec | ±0.95% | 68 | ✓ |
| DataView#getFloat32 | 99,311 ops/sec | ±0.41% | 67 | |
| BrowserBuffer#readUInt32LE | 843,214 ops/sec | ±0.70% | 69 | ✓ |
| DataView#getUint32 | 103,024 ops/sec | ±0.64% | 67 | |
| BrowserBuffer#slice | 1,013,941 ops/sec | ±0.75% | 67 | |
| Uint8Array#subarray | 1,903,928 ops/sec | ±0.53% | 67 | ✓ |
| BrowserBuffer#writeFloatBE | 61,387 ops/sec | ±0.90% | 67 | |
| DataView#setFloat32 | 141,249 ops/sec | ±0.40% | 66 | ✓ |
| Method | Operations | Accuracy | Sampled | Fastest |
|---|---|---|---|---|
| BrowserBuffer#bracket-notation | 20,800,421 ops/sec | ±1.84% | 60 | |
| Uint8Array#bracket-notation | 20,826,235 ops/sec | ±2.02% | 61 | ✓ |
| BrowserBuffer#concat | 153,076 ops/sec | ±2.32% | 61 | |
| Uint8Array#concat | 1,255,674 ops/sec | ±8.65% | 52 | ✓ |
| BrowserBuffer#copy(16000) | 1,105,312 ops/sec | ±1.16% | 63 | |
| Uint8Array#copy(16000) | 1,615,911 ops/sec | ±0.55% | 66 | ✓ |
| BrowserBuffer#copy(16) | 16,357,599 ops/sec | ±0.73% | 68 | |
| Uint8Array#copy(16) | 31,436,281 ops/sec | ±1.05% | 68 | ✓ |
| BrowserBuffer#new(16000) | 52,995 ops/sec | ±6.01% | 35 | |
| Uint8Array#new(16000) | 87,686 ops/sec | ±5.68% | 45 | ✓ |
| BrowserBuffer#new(16) | 252,031 ops/sec | ±1.61% | 66 | |
| Uint8Array#new(16) | 8,477,026 ops/sec | ±0.49% | 68 | ✓ |
| BrowserBuffer#readDoubleBE | 99,871 ops/sec | ±0.41% | 69 | |
| DataView#getFloat64 | 285,663 ops/sec | ±0.70% | 68 | ✓ |
| BrowserBuffer#readFloatBE | 115,540 ops/sec | ±0.42% | 69 | |
| DataView#getFloat32 | 288,722 ops/sec | ±0.82% | 68 | ✓ |
| BrowserBuffer#readUInt32LE | 633,926 ops/sec | ±1.08% | 67 | ✓ |
| DataView#getUint32 | 294,808 ops/sec | ±0.79% | 64 | |
| BrowserBuffer#slice | 349,425 ops/sec | ±0.46% | 69 | |
| Uint8Array#subarray | 5,965,819 ops/sec | ±0.60% | 65 | ✓ |
| BrowserBuffer#writeFloatBE | 59,980 ops/sec | ±0.41% | 67 | |
| DataView#setFloat32 | 317,634 ops/sec | ±0.63% | 68 | ✓ |
| Method | Operations | Accuracy | Sampled | Fastest |
|---|---|---|---|---|
| BrowserBuffer#bracket-notation | 10,279,729 ops/sec | ±2.25% | 56 | ✓ |
| Uint8Array#bracket-notation | 10,030,767 ops/sec | ±2.23% | 59 | |
| BrowserBuffer#concat | 144,138 ops/sec | ±1.38% | 65 | |
| Uint8Array#concat | 4,950,764 ops/sec | ±1.70% | 63 | ✓ |
| BrowserBuffer#copy(16000) | 1,058,548 ops/sec | ±1.51% | 64 | |
| Uint8Array#copy(16000) | 1,409,666 ops/sec | ±1.17% | 65 | ✓ |
| BrowserBuffer#copy(16) | 6,282,529 ops/sec | ±1.88% | 58 | |
| Uint8Array#copy(16) | 11,907,128 ops/sec | ±2.87% | 58 | ✓ |
| BrowserBuffer#new(16000) | 101,663 ops/sec | ±3.89% | 57 | |
| Uint8Array#new(16000) | 22,050,818 ops/sec | ±6.51% | 46 | ✓ |
| BrowserBuffer#new(16) | 176,072 ops/sec | ±2.13% | 64 | |
| Uint8Array#new(16) | 24,385,731 ops/sec | ±5.01% | 51 | ✓ |
| BrowserBuffer#readDoubleBE | 41,341 ops/sec | ±1.06% | 67 | |
| DataView#getFloat64 | 322,280 ops/sec | ±0.84% | 68 | ✓ |
| BrowserBuffer#readFloatBE | 46,141 ops/sec | ±1.06% | 65 | |
| DataView#getFloat32 | 337,025 ops/sec | ±0.43% | 69 | ✓ |
| BrowserBuffer#readUInt32LE | 151,551 ops/sec | ±1.02% | 66 | |
| DataView#getUint32 | 308,278 ops/sec | ±0.94% | 67 | ✓ |
| BrowserBuffer#slice | 197,365 ops/sec | ±0.95% | 66 | |
| Uint8Array#subarray | 9,558,024 ops/sec | ±3.08% | 58 | ✓ |
| BrowserBuffer#writeFloatBE | 17,518 ops/sec | ±1.03% | 63 | |
| DataView#setFloat32 | 319,751 ops/sec | ±0.48% | 68 | ✓ |
| Method | Operations | Accuracy | Sampled | Fastest |
|---|---|---|---|---|
| BrowserBuffer#bracket-notation | 10,489,828 ops/sec | ±3.25% | 90 | |
| Uint8Array#bracket-notation | 10,534,884 ops/sec | ±0.81% | 92 | ✓ |
| NodeBuffer#bracket-notation | 10,389,910 ops/sec | ±0.97% | 87 | |
| BrowserBuffer#concat | 487,830 ops/sec | ±2.58% | 88 | |
| Uint8Array#concat | 1,814,327 ops/sec | ±1.28% | 88 | ✓ |
| NodeBuffer#concat | 1,636,523 ops/sec | ±1.88% | 73 | |
| BrowserBuffer#copy(16000) | 1,073,665 ops/sec | ±0.77% | 90 | |
| Uint8Array#copy(16000) | 1,348,517 ops/sec | ±0.84% | 89 | ✓ |
| NodeBuffer#copy(16000) | 1,289,533 ops/sec | ±0.82% | 93 | |
| BrowserBuffer#copy(16) | 12,782,706 ops/sec | ±0.74% | 85 | |
| Uint8Array#copy(16) | 14,180,427 ops/sec | ±0.93% | 92 | ✓ |
| NodeBuffer#copy(16) | 11,083,134 ops/sec | ±1.06% | 89 | |
| BrowserBuffer#new(16000) | 141,678 ops/sec | ±3.30% | 67 | |
| Uint8Array#new(16000) | 161,491 ops/sec | ±2.96% | 60 | |
| NodeBuffer#new(16000) | 292,699 ops/sec | ±3.20% | 55 | ✓ |
| BrowserBuffer#new(16) | 1,655,466 ops/sec | ±2.41% | 82 | |
| Uint8Array#new(16) | 14,399,926 ops/sec | ±0.91% | 94 | ✓ |
| NodeBuffer#new(16) | 3,894,696 ops/sec | ±0.88% | 92 | |
| BrowserBuffer#readDoubleBE | 109,582 ops/sec | ±0.75% | 93 | ✓ |
| DataView#getFloat64 | 91,235 ops/sec | ±0.81% | 90 | |
| NodeBuffer#readDoubleBE | 88,593 ops/sec | ±0.96% | 81 | |
| BrowserBuffer#readFloatBE | 139,854 ops/sec | ±1.03% | 85 | ✓ |
| DataView#getFloat32 | 98,744 ops/sec | ±0.80% | 89 | |
| NodeBuffer#readFloatBE | 92,769 ops/sec | ±0.94% | 93 | |
| BrowserBuffer#readUInt32LE | 710,861 ops/sec | ±0.82% | 92 | |
| DataView#getUint32 | 117,893 ops/sec | ±0.84% | 91 | |
| NodeBuffer#readUInt32LE | 851,412 ops/sec | ±0.72% | 93 | ✓ |
| BrowserBuffer#slice | 1,673,877 ops/sec | ±0.73% | 94 | |
| Uint8Array#subarray | 6,919,243 ops/sec | ±0.67% | 90 | ✓ |
| NodeBuffer#slice | 4,617,604 ops/sec | ±0.79% | 93 | |
| BrowserBuffer#writeFloatBE | 66,011 ops/sec | ±0.75% | 93 | |
| DataView#setFloat32 | 127,760 ops/sec | ±0.72% | 93 | ✓ |
| NodeBuffer#writeFloatBE | 103,352 ops/sec | ±0.83% | 93 | |
| Method | Operations | Accuracy | Sampled | Fastest |
|---|---|---|---|---|
| BrowserBuffer#bracket-notation | 10,990,488 ops/sec | ±1.11% | 91 | |
| Uint8Array#bracket-notation | 11,268,757 ops/sec | ±0.65% | 97 | |
| NodeBuffer#bracket-notation | 11,353,260 ops/sec | ±0.83% | 94 | ✓ |
| BrowserBuffer#concat | 378,954 ops/sec | ±0.74% | 94 | |
| Uint8Array#concat | 1,358,288 ops/sec | ±0.97% | 87 | |
| NodeBuffer#concat | 1,934,050 ops/sec | ±1.11% | 78 | ✓ |
| BrowserBuffer#copy(16000) | 894,538 ops/sec | ±0.56% | 84 | |
| Uint8Array#copy(16000) | 1,442,656 ops/sec | ±0.71% | 96 | |
| NodeBuffer#copy(16000) | 1,457,898 ops/sec | ±0.53% | 92 | ✓ |
| BrowserBuffer#copy(16) | 12,870,457 ops/sec | ±0.67% | 95 | |
| Uint8Array#copy(16) | 16,643,989 ops/sec | ±0.61% | 93 | ✓ |
| NodeBuffer#copy(16) | 14,885,848 ops/sec | ±0.74% | 94 | |
| BrowserBuffer#new(16000) | 109,264 ops/sec | ±4.21% | 63 | |
| Uint8Array#new(16000) | 138,916 ops/sec | ±1.87% | 61 | |
| NodeBuffer#new(16000) | 281,449 ops/sec | ±3.58% | 51 | ✓ |
| BrowserBuffer#new(16) | 1,362,935 ops/sec | ±0.56% | 99 | |
| Uint8Array#new(16) | 6,193,090 ops/sec | ±0.64% | 95 | ✓ |
| NodeBuffer#new(16) | 4,745,425 ops/sec | ±1.56% | 90 | |
| BrowserBuffer#readDoubleBE | 118,127 ops/sec | ±0.59% | 93 | ✓ |
| DataView#getFloat64 | 107,332 ops/sec | ±0.65% | 91 | |
| NodeBuffer#readDoubleBE | 116,274 ops/sec | ±0.94% | 95 | |
| BrowserBuffer#readFloatBE | 150,326 ops/sec | ±0.58% | 95 | ✓ |
| DataView#getFloat32 | 110,541 ops/sec | ±0.57% | 98 | |
| NodeBuffer#readFloatBE | 121,599 ops/sec | ±0.60% | 87 | |
| BrowserBuffer#readUInt32LE | 814,147 ops/sec | ±0.62% | 93 | |
| DataView#getUint32 | 137,592 ops/sec | ±0.64% | 90 | |
| NodeBuffer#readUInt32LE | 931,650 ops/sec | ±0.71% | 96 | ✓ |
| BrowserBuffer#slice | 878,590 ops/sec | ±0.68% | 93 | |
| Uint8Array#subarray | 2,843,308 ops/sec | ±1.02% | 90 | |
| NodeBuffer#slice | 4,998,316 ops/sec | ±0.68% | 90 | ✓ |
| BrowserBuffer#writeFloatBE | 65,927 ops/sec | ±0.74% | 93 | |
| DataView#setFloat32 | 139,823 ops/sec | ±0.97% | 89 | ✓ |
| NodeBuffer#writeFloatBE | 135,763 ops/sec | ±0.65% | 96 | |
First, install the project:
npm install
Then, to run tests in Node.js, run:
npm run test-node
To test locally in a browser, you can run:
npm run test-browser-local
This will print out a URL that you can then open in a browser to run the tests, using Zuul.
To run automated browser tests using Saucelabs, ensure that your SAUCE_USERNAME and SAUCE_ACCESS_KEY environment variables are set, then run:
npm test
This is what’s run in Travis, to check against various browsers. The list of browsers is kept in the .zuul.yml file.
This module uses JavaScript Standard Style.
To test that the code conforms to the style, npm install and run:
./node_modules/.bin/standard
This was originally forked from buffer-browserify.
base is the foundation for creating modular, unit testable and highly pluggable node.js applications, starting with a handful of common methods, like
set, get, del and use.
Install with npm:
Base is a framework for rapidly creating high quality node.js applications, using plugins like building blocks.
The core team follows these principles to help guide API decisions:
Only add methods to Base or base plugins when necessary: the API was designed to provide only the minimum necessary functionality for creating a useful application, with or without plugins.
Base core
Base itself ships with only a handful of useful methods, such as:
- .set: for setting values on the instance
- .get: for getting values from the instance
- .has: to check if a property exists on the instance
- .define: for setting non-enumerable values on the instance
- .use: for adding plugins

Be generic

When deciding on a method to add or remove, we try to answer these questions:
Plugin system
It couldn’t be easier to extend Base with any features or custom functionality you can think of.
Base plugins are just functions that take an instance of Base:
var base = new Base();
function plugin(base) {
// do plugin stuff, in pure JavaScript
}
// use the plugin
base.use(plugin);

Inheritance
Easily inherit Base using .extend:
var Base = require('base');
function MyApp() {
Base.call(this);
}
Base.extend(MyApp);
var app = new MyApp();
app.set('a', 'b');
app.get('a');
//=> 'b'

Inherit or instantiate with a namespace
By default, the .get, .set and .has methods set and get values from the root of the base instance. You can customize this using the .namespace method exposed on the exported function. For example:
var Base = require('base');
// get and set values on the `base.cache` object
var base = Base.namespace('cache');
var app = base();
app.set('foo', 'bar');
console.log(app.cache.foo);
//=> 'bar'

Usage
var Base = require('base');
var app = new Base();
app.set('foo', 'bar');
console.log(app.foo);
//=> 'bar'

Create an instance of Base with the given config and options.
Params
- config {Object}: If supplied, this object is passed to cache-base to merge onto the instance upon instantiation.
- options {Object}: If supplied, this object is used to initialize the base.options object.

Example
// initialize with `config` and `options`
var app = new Base({isApp: true}, {abc: true});
app.set('foo', 'bar');
// values defined with the given `config` object will be on the root of the instance
console.log(app.baz); //=> undefined
console.log(app.foo); //=> 'bar'
// or use `.get`
console.log(app.get('isApp')); //=> true
console.log(app.get('foo')); //=> 'bar'
// values defined with the given `options` object will be on `app.options`
console.log(app.options.abc); //=> true

Set the given name on app._name and app.is* properties. Used for doing lookups in plugins.
Params
- name {String}
- returns {Boolean}

Example
app.is('foo');
console.log(app._name);
//=> 'foo'
console.log(app.isFoo);
//=> true
app.is('bar');
console.log(app.isFoo);
//=> true
console.log(app.isBar);
//=> true
console.log(app._name);
//=> 'bar'

Returns true if a plugin has already been registered on an instance.
Plugin implementors are encouraged to use this first thing in a plugin to prevent the plugin from being called more than once on the same instance.
Params
- name {String}: The plugin name.
- register {Boolean}: To record the plugin as registered when it is not already, pass true as the second argument.
- returns {Boolean}: Returns true if a plugin is already registered.

Events
- emits: plugin. Emits the name of the plugin being registered. Useful for unit tests, to ensure plugins are only registered once.

Example
var base = new Base();
base.use(function(app) {
if (app.isRegistered('myPlugin')) return;
// do stuff to `app`
});
// to also record the plugin as being registered
base.use(function(app) {
if (app.isRegistered('myPlugin', true)) return;
// do stuff to `app`
});Define a plugin function to be called immediately upon init. Plugins are chainable and expose the following arguments to the plugin function:
app: the current instance of Basebase: the first ancestor instance of BaseParams
fn {Function}: plugin function to callreturns {Object}: Returns the item instance for chaining.Example
The .define method is used for adding a non-enumerable property on the instance. Dot-notation is not supported with define.
Params
- key {String}: The name of the property to define.
- value {any}
- returns {Object}: Returns the instance for chaining.

Example
// arbitrary `render` function using lodash `template`
app.define('render', function(str, locals) {
return _.template(str)(locals);
});

Mix property key onto the Base prototype. If base is inherited using Base.extend this method will be overridden by a new mixin method that will only add properties to the prototype of the inheriting application.
Params
- key {String}
- val {Object|Array}
- returns {Object}: Returns the base instance for chaining.

Example
Getter/setter used when creating nested instances of Base, for storing a reference to the first ancestor instance. This works by setting an instance of Base on the parent property of a “child” instance. The base property defaults to the current instance if no parent property is defined.
Example
// create an instance of `Base`, this is our first ("base") instance
var first = new Base();
first.foo = 'bar'; // arbitrary property, to make it easier to see what's happening later
// create another instance
var second = new Base();
// create a reference to the first instance (`first`)
second.parent = first;
// create another instance
var third = new Base();
// create a reference to the previous instance (`second`)
// repeat this pattern every time a "child" instance is created
third.parent = second;
// we can always access the first instance using the `base` property
console.log(first.base.foo);
//=> 'bar'
console.log(second.base.foo);
//=> 'bar'
console.log(third.base.foo);
//=> 'bar'
// and now you know how to get to third base ;)

Static method for adding global plugin functions that will be added to an instance when created.
Params
- fn {Function}: Plugin function to use on each instance.
- returns {Object}: Returns the Base constructor for chaining

Example
Base.use(function(app) {
app.foo = 'bar';
});
var app = new Base();
console.log(app.foo);
//=> 'bar'

Static method for inheriting the prototype and static methods of the Base class. This method greatly simplifies the process of creating inheritance-based applications. See static-extend for more details.
Params
- Ctor {Function}: constructor to extend
- methods {Object}: Optional prototype properties to mix in.
- returns {Object}: Returns the Base constructor for chaining

Example
var extend = cu.extend(Parent);
Parent.extend(Child);
// optional methods
Parent.extend(Child, {
foo: function() {},
bar: function() {}
});

Used for adding methods to the Base prototype, and/or to the prototype of child instances. When a mixin function returns a function, the returned function is pushed onto the .mixins array, making it available to be used on inheriting classes whenever Base.mixins() is called (e.g. Base.mixins(Child)).
Params
- fn {Function}: Function to call
- returns {Object}: Returns the Base constructor for chaining

Example
Static method for running global mixin functions against a child constructor. Mixins must be registered before calling this method.
Params
- Child {Function}: Constructor function of a child class
- returns {Object}: Returns the Base constructor for chaining

Example
Similar to util.inherits, but copies all static properties, prototype properties, and getters/setters from Provider to Receiver. See class-utils for more details.
Params
- Receiver {Function}: Receiving (child) constructor
- Provider {Function}: Providing (parent) constructor
- returns {Object}: Returns the Base constructor for chaining

Example
The following node.js applications were built with Base:
Statements : 98.91% ( 91/92 )
Branches : 92.86% ( 26/28 )
Functions : 100% ( 17/17 )
Lines : 98.9% ( 90/91 )
Breaking changes
- .use and .run methods are now non-enumerable

Breaking changes

- .is no longer takes a function; a string must be passed
- all debug code was removed
- app._namespace was removed (related to debug)
- .plugin, .use, and .define no longer emit events
- .assertPlugin was removed
- .lazy was removed

Several plugins extend Base with extra functionality, including one that adds a data method to base-methods, one that adds options methods like option, enable and disable, and one that adds a pkg method that exposes pkg-store to your base application. See each plugin's homepage for details.

Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
| Commits | Contributor |
|---|---|
| 141 | jonschlinkert |
| 30 | doowb |
| 3 | charlike |
| 1 | criticalmash |
| 1 | wtgtybhertgeghgtwtg |
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on September 07, 2017.
Website | Configuring | Rules | Contributing | Reporting Bugs | Code of Conduct | Twitter | Mailing List | Chat Room
ESLint is a tool for identifying and reporting on patterns found in ECMAScript/JavaScript code. In many ways, it is similar to JSLint and JSHint with a few exceptions:
Prerequisites: Node.js (^10.12.0, or >=12.0.0) built with SSL support. (If you are using an official Node.js distribution, SSL is always built in.)
You can install ESLint using npm:
npm install eslint --save-dev
You should then set up a configuration file:
$ ./node_modules/.bin/eslint --init
After that, you can run ESLint on any file or directory like this:
$ ./node_modules/.bin/eslint yourfile.js
After running eslint --init, you’ll have a .eslintrc file in your directory. In it, you’ll see some rules configured like this:
The names "semi" and "quotes" are the names of rules in ESLint. The first value is the error level of the rule and can be one of these values:
- "off" or 0 - turn the rule off
- "warn" or 1 - turn the rule on as a warning (doesn’t affect exit code)
- "error" or 2 - turn the rule on as an error (exit code will be 1)

The three error levels allow you fine-grained control over how ESLint applies rules (for more configuration options and details, see the configuration docs).
ESLint adheres to the JS Foundation Code of Conduct.
Before filing an issue, please be sure to read the guidelines for what you’re reporting:
Yes. JSCS has reached end of life and is no longer supported.
We have prepared a migration guide to help you convert your JSCS settings to an ESLint configuration.
We are now at or near 100% compatibility with JSCS. If you try ESLint and believe we are not yet compatible with a JSCS rule/configuration, please create an issue (mentioning that it is a JSCS compatibility issue) and we will evaluate it as per our normal process.
No, ESLint does both traditional linting (looking for problematic patterns) and style checking (enforcement of conventions). You can use ESLint for everything, or you can combine both using Prettier to format your code and ESLint to catch possible errors.
- Make sure your plugin (and ESLint) are both in your project’s package.json as devDependencies (or dependencies, if your project uses ESLint at runtime).
- Make sure you have run npm install and all your dependencies are installed.
- Make sure your plugin’s peer dependencies are installed. You can run npm view eslint-plugin-myplugin peerDependencies to see what peer dependencies eslint-plugin-myplugin has.

Yes, ESLint natively supports parsing JSX syntax (this must be enabled in configuration). Please note that supporting JSX syntax is not the same as supporting React. React applies specific semantics to JSX syntax that ESLint doesn’t recognize. We recommend using eslint-plugin-react if you are using React and want React semantics.
ESLint has full support for ECMAScript 3, 5 (default), 2015, 2016, 2017, 2018, 2019, and 2020. You can set your desired ECMAScript syntax (and other settings, like global variables or your target environments) through configuration.
ESLint’s parser only officially supports the latest final ECMAScript standard. We will make changes to core rules in order to avoid crashes on stage 3 ECMAScript syntax proposals (as long as they are implemented using the correct experimental ESTree syntax). We may make changes to core rules to better work with language extensions (such as JSX, Flow, and TypeScript) on a case-by-case basis.
In other cases (including if rules need to warn on more or fewer cases due to new syntax, rather than just not crashing), we recommend you use other parsers and/or rule plugins. If you are using Babel, you can use the babel-eslint parser and eslint-plugin-babel to use any option available in Babel.
Join our Mailing List or Chatroom.
We have scheduled releases every two weeks on Friday or Saturday. You can follow a release issue for updates about the scheduling of any particular release.
ESLint takes security seriously. We work hard to ensure that ESLint is safe for everyone and that security issues are addressed quickly and responsibly. Read the full security policy.
ESLint follows semantic versioning. However, due to the nature of ESLint as a code quality tool, it’s not always clear when a minor or major version bump occurs. To help clarify this for everyone, we’ve defined the following semantic versioning policy for ESLint:
- eslint:recommended is updated and will result in strictly fewer linting errors (e.g., rule removals).
- eslint:recommended is updated and may result in new linting errors (e.g., rule additions, most rule option updates).

According to our policy, any minor update may report more linting errors than the previous release (ex: from a bug fix). As such, we recommend using the tilde (~) in package.json e.g. "eslint": "~3.1.0" to guarantee the results of your builds.
These folks keep the project moving and are resources for help.
The people who manage releases, review feature requests, and meet regularly to ensure ESLint is properly maintained.
Nicholas C. Zakas, Brandon Mills, Toru Nagashima, Milos Djermanovic
The people who review and implement new features.
薛定谔的猫
The people who review and fix bugs and help triage issues.
Pig Fang, Anix, YeonJuan
The following companies, organizations, and individuals support ESLint’s ongoing maintenance and development. Become a Sponsor to get your logo on our README and website.
As a node module:
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'

As a command-line utility:
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
--rtl
Coerce version strings right to left
--ltr
Coerce version strings left to right (default)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
A “version” is described by the v2.0.0 specification found at https://semver.org/.
A leading "=" or "v" character is stripped off and ignored.
A version range is a set of comparators which specify versions that satisfy the range.
A comparator is composed of an operator and a version. The set of primitive operators is:
- < Less than
- <= Less than or equal to
- > Greater than
- >= Greater than or equal to
- = Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.

For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.
Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.
A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.
For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.
The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
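To make the comparator-set and || semantics concrete, here is a toy evaluator for plain x.y.z versions (a hypothetical satisfies sketch that ignores prereleases and advanced syntax; the real semver package handles all of that):

```javascript
// Compare two plain x.y.z versions numerically, component by component.
function cmp(a, b) {
  const pa = a.split('.').map(Number)
  const pb = b.split('.').map(Number)
  for (let i = 0; i < 3; i++) {
    if (pa[i] !== pb[i]) return pa[i] - pb[i]
  }
  return 0
}

const ops = {
  '>=': d => d >= 0, '<=': d => d <= 0,
  '>': d => d > 0, '<': d => d < 0, '=': d => d === 0
}

// A version matches a range if EVERY comparator in at least ONE of the
// ||-separated comparator sets is satisfied by the version.
function satisfies(version, range) {
  return range.split('||').some(set =>
    set.trim().split(/\s+/).every(comp => {
      const [, op, v] = /^(>=|<=|>|<|=)?(.*)$/.exec(comp)
      return ops[op || '='](cmp(version, v)) // no operator means equality
    })
  )
}

console.log(satisfies('1.2.8', '>=1.2.7 <1.3.0'))           // true
console.log(satisfies('1.3.0', '>=1.2.7 <1.3.0'))           // false
console.log(satisfies('1.4.6', '1.2.7 || >=1.2.9 <2.0.0'))  // true
console.log(satisfies('1.2.8', '1.2.7 || >=1.2.9 <2.0.0'))  // false
```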
If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.
For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.
Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.
The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:
command-line example:
Which then can be used to increment further:
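As a rough sketch of what prerelease incrementing does (the incPrerelease helper below is hypothetical and covers only the simple cases; use semver.inc in practice):

```javascript
// Sketch of `.inc(version, 'prerelease', identifier)` for simple cases:
// a release version gets its patch bumped and `<id>.0` appended; a
// matching prerelease gets its counter bumped; a different identifier
// restarts the counter at 0.
function incPrerelease(version, id) {
  const m = /^(\d+)\.(\d+)\.(\d+)(?:-(.+))?$/.exec(version)
  const [, major, minor, patch, pre] = m
  if (!pre) return `${major}.${minor}.${Number(patch) + 1}-${id}.0`
  const parts = pre.split('.')
  if (parts[0] === id) {
    return `${major}.${minor}.${patch}-${id}.${Number(parts[1]) + 1}`
  }
  return `${major}.${minor}.${patch}-${id}.0`
}

console.log(incPrerelease('1.2.3', 'beta'))        // '1.2.4-beta.0'
console.log(incPrerelease('1.2.4-beta.0', 'beta')) // '1.2.4-beta.1'
```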
Advanced range syntax desugars to primitive comparators in deterministic ways.
Advanced ranges may be combined in the same way as primitive comparators using white space or ||.
Hyphen Ranges X.Y.Z - A.B.C

Specifies an inclusive set.

- 1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4

If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.

- 1.2 - 2.3.4 := >=1.2.0 <=2.3.4

If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.

- 1.2.3 - 2.3 := >=1.2.3 <2.4.0
- 1.2.3 - 2 := >=1.2.3 <3.0.0

X-Ranges 1.2.x 1.X 1.2.* *

Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.

- * := >=0.0.0 (Any version satisfies)
- 1.x := >=1.0.0 <2.0.0 (Matching major version)
- 1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)

A partial version range is treated as an X-Range, so the special character is in fact optional.

- "" (empty string) := * := >=0.0.0
- 1 := 1.x.x := >=1.0.0 <2.0.0
- 1.2 := 1.2.x := >=1.2.0 <1.3.0

Tilde Ranges ~1.2.3 ~1.2 ~1

Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.
- ~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
- ~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
- ~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
- ~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
- ~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
- ~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
- ~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.

Caret Ranges ^1.2.3 ^0.2.5 ^0.0.4

Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.
Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.
Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.
^1.2.3 := >=1.2.3 <2.0.0
^0.2.3 := >=0.2.3 <0.3.0
^0.0.3 := >=0.0.3 <0.0.4
^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0
Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^0.0.3-beta := >=0.0.3-beta <0.0.4
Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.
When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.
^1.2.x := >=1.2.0 <2.0.0
^0.0.x := >=0.0.0 <0.1.0
^0.0 := >=0.0.0 <0.1.0
Missing minor and patch values will desugar to zero, but also allow flexibility within those values, even if the major version is zero.
^1.x := >=1.0.0 <2.0.0
^0.x := >=0.0.0 <1.0.0
Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:
range-set ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range ::= hyphen | simple ( ' ' simple ) * | ''
hyphen ::= partial ' - ' partial
simple ::= primitive | partial | tilde | caret
primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr ::= 'x' | 'X' | '*' | nr
nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde ::= '~' partial
caret ::= '^' partial
qualifier ::= ( '-' pre )? ( '+' build )?
pre ::= parts
build ::= parts
parts ::= part ( '.' part ) *
part ::= nr | [-0-9A-Za-z]+
All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:
loose: Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
includePrerelease: Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.
Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.
valid(v): Return the parsed version, or null if it’s not valid.
inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid.
premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor, and prepatch work the same way.
prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
major(v): Return the major version number.
minor(v): Return the minor version number.
patch(v): Return the patch version number.
intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.
gt(v1, v2): v1 > v2
gte(v1, v2): v1 >= v2
lt(v1, v2): v1 < v2
lte(v1, v2): v1 <= v2
eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
neq(v1, v2): v1 != v2 The opposite of eq.
cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
diff(v1, v2): Returns the difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.
intersects(comparator): Return true if the comparators intersect.
validRange(range): Return the valid range or null if it’s not valid.
satisfies(version, range): Return true if the version satisfies the range.
maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
minVersion(range): Return the lowest version that can possibly match the given range.
gtr(version, range): Return true if version is greater than all the versions possible in the range.
ltr(version, range): Return true if version is less than all the versions possible in the range.
outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
intersects(range): Return true if any of the range’s comparators intersect.
Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.
If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.
coerce(version, options): Coerces a string to semver if possible.
This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).
If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.
clean(version): Clean a string to be a valid semver if possible.
This will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges.
ex.
* s.clean(' = v 2.1.5foo'): null
* s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
* s.clean(' = v 2.1.5-foo'): null
* s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
* s.clean('=v2.1.5'): '2.1.5'
* s.clean(' =v2.1.5'): '2.1.5'
* s.clean(' 2.1.5 '): '2.1.5'
* s.clean('~1.0.0'): null
As a node module:
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'
As a command-line utility:
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
--rtl
Coerce version strings right to left
--ltr
Coerce version strings left to right (default)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
A “version” is described by the v2.0.0 specification found at https://semver.org/.
A leading "=" or "v" character is stripped off and ignored.
A version range is a set of comparators which specify versions that satisfy the range.
A comparator is composed of an operator and a version. The set of primitive operators is:
< Less than
<= Less than or equal to
> Greater than
>= Greater than or equal to
= Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.
For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.
Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.
A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.
For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.
The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.
For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.
Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.
The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier, which can then be incremented further.
If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.
coerce(version, options): Coerces a string to semver if possibleThis aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Integer.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).
If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.
clean(version): Clean a string to be a valid semver if possibleThis will return a cleaned and trimmed semver version. If the provided version is not valid a null will be returned. This does not work for ranges.
ex. * s.clean(' = v 2.1.5foo'): null * s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo' * s.clean(' = v 2.1.5-foo'): null * s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo' * s.clean('=v2.1.5'): '2.1.5' * s.clean(' =v2.1.5'): 2.1.5 * s.clean(' 2.1.5 '): '2.1.5' * s.clean('~1.0.0'): null
As a node module:
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'
As a command-line utility:
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
--rtl
Coerce version strings right to left
--ltr
Coerce version strings left to right (default)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
A “version” is described by the v2.0.0 specification found at https://semver.org/.
A leading "=" or "v" character is stripped off and ignored.
A version range is a set of comparators which specify versions that satisfy the range.
A comparator is composed of an operator and a version. The set of primitive operators is:
< Less than
<= Less than or equal to
> Greater than
>= Greater than or equal to
= Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.
For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.
Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.
A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.
For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.
The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.
For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.
Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.
The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:
command-line example:
Which then can be used to increment further:
Advanced range syntax desugars to primitive comparators in deterministic ways.
Advanced ranges may be combined in the same way as primitive comparators using white space or ||.
X.Y.Z - A.B.C: Specifies an inclusive set.
1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4
If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.
1.2 - 2.3.4 := >=1.2.0 <=2.3.4
If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.
1.2.3 - 2.3 := >=1.2.3 <2.4.0
1.2.3 - 2 := >=1.2.3 <3.0.0
1.2.x 1.X 1.2.* *: Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.
* := >=0.0.0 (Any version satisfies)
1.x := >=1.0.0 <2.0.0 (Matching major version)
1.2.x := >=1.2.0 <1.3.0 (Matching major and minor versions)
A partial version range is treated as an X-Range, so the special character is in fact optional.
"" (empty string) := * := >=0.0.01 := 1.x.x := >=1.0.0 <2.0.01.2 := 1.2.x := >=1.2.0 <1.3.0~1.2.3 ~1.2 ~1Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.
~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0
~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0 (Same as 1.2.x)
~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0 (Same as 1.x)
~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0
~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0 (Same as 0.2.x)
~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0 (Same as 0.x)
~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^1.2.3 ^0.2.5 ^0.0.4: Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.
Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.
Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.
^1.2.3 := >=1.2.3 <2.0.0
^0.2.3 := >=0.2.3 <0.3.0
^0.0.3 := >=0.0.3 <0.0.4
^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^0.0.3-beta := >=0.0.3-beta <0.0.4 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.
When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.
^1.2.x := >=1.2.0 <2.0.0
^0.0.x := >=0.0.0 <0.1.0
^0.0 := >=0.0.0 <0.1.0
Missing minor and patch values will desugar to zero, but also allow flexibility within those values, even if the major version is zero.
^1.x := >=1.0.0 <2.0.0
^0.x := >=0.0.0 <1.0.0
Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:
range-set ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range ::= hyphen | simple ( ' ' simple ) * | ''
hyphen ::= partial ' - ' partial
simple ::= primitive | partial | tilde | caret
primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr ::= 'x' | 'X' | '*' | nr
nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde ::= '~' partial
caret ::= '^' partial
qualifier ::= ( '-' pre )? ( '+' build )?
pre ::= parts
build ::= parts
parts ::= part ( '.' part ) *
part ::= nr | [-0-9A-Za-z]+
All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:
loose: Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
includePrerelease: Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.
Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.
valid(v): Return the parsed version, or null if it’s not valid.
inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid.
premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor and prepatch work the same way.
prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.
prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]
major(v): Return the major version number.
minor(v): Return the minor version number.
patch(v): Return the patch version number.
intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.
parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.
gt(v1, v2): v1 > v2
gte(v1, v2): v1 >= v2
lt(v1, v2): v1 < v2
lte(v1, v2): v1 <= v2
eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.
neq(v1, v2): v1 != v2 The opposite of eq.
cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.
compare(v1, v2): Return 0 if v1 == v2, 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().
rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().
compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort().
diff(v1, v2): Returns the difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.
intersects(comparator): Return true if the comparators intersect.
validRange(range): Return the valid range or null if it’s not valid.
satisfies(version, range): Return true if the version satisfies the range.
maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.
minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.
minVersion(range): Return the lowest version that can possibly match the given range.
gtr(version, range): Return true if version is greater than all the versions possible in the range.
ltr(version, range): Return true if version is less than all the versions possible in the range.
outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)
intersects(range): Return true if any of the range’s comparators intersect.
Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.
If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.
coerce(version, options): Coerces a string to semver if possible.
This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).
If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.
clean(version): Clean a string to be a valid semver if possible.
This will return a cleaned and trimmed semver version. If the provided version is not valid, null will be returned. This does not work for ranges.
ex.
* s.clean(' = v 2.1.5foo'): null
* s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
* s.clean(' = v 2.1.5-foo'): null
* s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
* s.clean('=v2.1.5'): '2.1.5'
* s.clean(' =v2.1.5'): '2.1.5'
* s.clean(' 2.1.5 '): '2.1.5'
* s.clean('~1.0.0'): null
Node.js body parsing middleware.
Parse incoming request bodies in a middleware before your handlers, available under the req.body property.
Note As req.body’s shape is based on user-controlled input, all properties and values in this object are untrusted and should be validated before trusting. For example, req.body.foo.toString() may fail in multiple ways, for example the foo property may not be there or may not be a string, and toString may not be a function and instead a string or other user input.
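A defensive read of req.body might look like the following sketch (readUsername is a hypothetical helper, not part of body-parser):

```javascript
// Hypothetical helper: safely extract a string field from an untrusted parsed body.
// The body may be {}, the field may be missing, or it may not be a string at all.
function readUsername (body) {
  if (body && typeof body.username === 'string') {
    return body.username
  }
  return null
}

console.log(readUsername({ username: 'alice' })) // 'alice'
console.log(readUsername({ username: 42 }))      // null
console.log(readUsername({}))                    // null
```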
Learn about the anatomy of an HTTP transaction in Node.js.
This does not handle multipart bodies, due to their complex and typically large nature. For multipart bodies, you may be interested in the following modules:
This module provides the following parsers:
Other body parsers you might be interested in:
The bodyParser object exposes various factories to create middlewares. All middlewares will populate the req.body property with the parsed body when the Content-Type request header matches the type option, or an empty object ({}) if there was no body to parse, the Content-Type was not matched, or an error occurred.
The various errors returned by this module are described in the errors section.
Returns middleware that only parses json and only looks at requests where the Content-Type header matches the type option. This parser accepts any Unicode encoding of the body and supports automatic inflation of gzip and deflate encodings.
A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body).
The json function takes an optional options object that may contain any of the following keys:
When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.
Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.
The reviver option is passed directly to JSON.parse as the second argument. You can find more information on this argument in the MDN documentation about JSON.parse.
When set to true, will only accept arrays and objects; when false will accept anything JSON.parse accepts. Defaults to true.
The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like json), a mime type (like application/json), or a mime type with a wildcard (like */* or */json). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/json.
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.
Returns middleware that parses all bodies as a Buffer and only looks at requests where the Content-Type header matches the type option. This parser supports automatic inflation of gzip and deflate encodings.
A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body). This will be a Buffer object of the body.
The raw function takes an optional options object that may contain any of the following keys:
When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.
Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.
The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like bin), a mime type (like application/octet-stream), or a mime type with a wildcard (like */* or application/*). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/octet-stream.
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.
Returns middleware that parses all bodies as a string and only looks at requests where the Content-Type header matches the type option. This parser supports automatic inflation of gzip and deflate encodings.
A new body string containing the parsed data is populated on the request object after the middleware (i.e. req.body). This will be a string of the body.
The text function takes an optional options object that may contain any of the following keys:
Specify the default character set for the text content if the charset is not specified in the Content-Type header of the request. Defaults to utf-8.
When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.
Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.
The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like txt), a mime type (like text/plain), or a mime type with a wildcard (like */* or text/*). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to text/plain.
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.
Returns middleware that only parses urlencoded bodies and only looks at requests where the Content-Type header matches the type option. This parser accepts only UTF-8 encoding of the body and supports automatic inflation of gzip and deflate encodings.
A new body object containing the parsed data is populated on the request object after the middleware (i.e. req.body). This object will contain key-value pairs, where the value can be a string or array (when extended is false), or any type (when extended is true).
The urlencoded function takes an optional options object that may contain any of the following keys:
The extended option allows you to choose between parsing the URL-encoded data with the querystring library (when false) or the qs library (when true). The “extended” syntax allows for rich objects and arrays to be encoded into the URL-encoded format, allowing for a JSON-like experience with URL-encoded. For more information, please see the qs library.
Defaults to true, but using the default has been deprecated. Please research the difference between qs and querystring and choose the appropriate setting.
When set to true, then deflated (compressed) bodies will be inflated; when false, deflated bodies are rejected. Defaults to true.
Controls the maximum request body size. If this is a number, then the value specifies the number of bytes; if it is a string, the value is passed to the bytes library for parsing. Defaults to '100kb'.
The parameterLimit option controls the maximum number of parameters that are allowed in the URL-encoded data. If a request contains more parameters than this value, a 413 will be returned to the client. Defaults to 1000.
The type option is used to determine what media type the middleware will parse. This option can be a string, array of strings, or a function. If not a function, type option is passed directly to the type-is library and this can be an extension name (like urlencoded), a mime type (like application/x-www-form-urlencoded), or a mime type with a wildcard (like */x-www-form-urlencoded). If a function, the type option is called as fn(req) and the request is parsed if it returns a truthy value. Defaults to application/x-www-form-urlencoded.
The verify option, if supplied, is called as verify(req, res, buf, encoding), where buf is a Buffer of the raw request body and encoding is the encoding of the request. The parsing can be aborted by throwing an error.
The middlewares provided by this module create errors depending on the error condition during parsing. The errors will typically have a status/statusCode property that contains the suggested HTTP response code, an expose property to determine if the message property should be displayed to the client, a type property to determine the type of error without matching against the message, and a body property containing the read body, if available.
The following are the common errors emitted, though any error can come through for various reasons.
This error will occur when the request had a Content-Encoding header that contained an encoding but the “inflation” option was set to false. The status property is set to 415, the type property is set to 'encoding.unsupported', and the charset property will be set to the encoding that is unsupported.
This error will occur when the request is aborted by the client before reading the body has finished. The received property will be set to the number of bytes received before the request was aborted and the expected property is set to the number of expected bytes. The status property is set to 400 and type property is set to 'request.aborted'.
This error will occur when the request body’s size is larger than the “limit” option. The limit property will be set to the byte limit and the length property will be set to the request body’s length. The status property is set to 413 and the type property is set to 'entity.too.large'.
This error will occur when the request’s length did not match the length from the Content-Length header. This typically happens when the request is malformed, such as when the Content-Length header was calculated based on characters instead of bytes. The status property is set to 400 and the type property is set to 'request.size.invalid'.
This error will occur when something called the req.setEncoding method prior to this middleware. This module operates directly on bytes only and you cannot call req.setEncoding when using this module. The status property is set to 500 and the type property is set to 'stream.encoding.set'.
This error will occur when the content of the request exceeds the configured parameterLimit for the urlencoded parser. The status property is set to 413 and the type property is set to 'parameters.too.many'.
This error will occur when the request had a charset parameter in the Content-Type header, but the iconv-lite module does not support it OR the parser does not support it. The charset is contained in the message as well as in the charset property. The status property is set to 415, the type property is set to 'charset.unsupported', and the charset property is set to the charset that is unsupported.
This error will occur when the request had a Content-Encoding header that contained an unsupported encoding. The encoding is contained in the message as well as in the encoding property. The status property is set to 415, the type property is set to 'encoding.unsupported', and the encoding property is set to the encoding that is unsupported.
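In an Express app, these errors can be distinguished by their type property in an error-handling middleware; a sketch (the handler and its messages are illustrative):

```javascript
// Illustrative Express-style error handler keyed on the type property described above.
function bodyParserErrorHandler (err, req, res, next) {
  if (err.type === 'entity.too.large') {
    return res.status(413).send('request body exceeds the configured limit of ' + err.limit + ' bytes')
  }
  if (err.type === 'charset.unsupported') {
    return res.status(415).send('unsupported charset: ' + err.charset)
  }
  if (err.type === 'parameters.too.many') {
    return res.status(413).send('too many parameters')
  }
  next(err) // not a body-parser error handled here
}
```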
This example demonstrates adding a generic JSON and URL-encoded parser as a top-level middleware, which will parse the bodies of all incoming requests. This is the simplest setup.
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// parse application/x-www-form-urlencoded
app.use(bodyParser.urlencoded({ extended: false }))
// parse application/json
app.use(bodyParser.json())
app.use(function (req, res) {
res.setHeader('Content-Type', 'text/plain')
res.write('you posted:\n')
res.end(JSON.stringify(req.body, null, 2))
})
This example demonstrates adding body parsers specifically to the routes that need them. In general, this is the most recommended way to use body-parser with Express.
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// create application/json parser
var jsonParser = bodyParser.json()
// create application/x-www-form-urlencoded parser
var urlencodedParser = bodyParser.urlencoded({ extended: false })
// POST /login gets urlencoded bodies
app.post('/login', urlencodedParser, function (req, res) {
res.send('welcome, ' + req.body.username)
})
// POST /api/users gets JSON bodies
app.post('/api/users', jsonParser, function (req, res) {
// create user in req.body
})
All the parsers accept a type option which allows you to change the Content-Type that the middleware will parse.
var express = require('express')
var bodyParser = require('body-parser')
var app = express()
// parse various different custom JSON types as JSON
app.use(bodyParser.json({ type: 'application/*+json' }))
// parse some custom thing into a Buffer
app.use(bodyParser.raw({ type: 'application/vnd.custom-type' }))
// parse an HTML body into a string
app.use(bodyParser.text({ type: 'text/html' }))
A tiny node.js debugging utility modelled after node core’s debugging technique.
Discussion around the V3 API is under way here
debug exposes a function; simply pass this function the name of your module, and it will return a decorated version of console.error for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.
Example app.js:
var debug = require('debug')('http')
, http = require('http')
, name = 'My App';
// fake app
debug('booting %s', name);
http.createServer(function(req, res){
debug(req.method + ' ' + req.url);
res.end('hello\n');
}).listen(3000, function(){
debug('listening');
});
// fake worker of some kind
require('./worker');
Example worker.js:
The DEBUG environment variable is then used to enable these based on space or comma-delimited names. For example, DEBUG=http node app.js enables the http debugger from the example above, and DEBUG=* enables everything.
On Windows the environment variable is set using the set command.
set DEBUG=*,-not_this
Note that PowerShell uses different syntax to set environment variables.
$env:DEBUG = "*,-not_this"
Then, run the program to be debugged as usual.
When actively developing an application it can be useful to see how much time is spent between one debug() call and the next. Suppose for example you invoke debug() before requesting a resource, and after as well; the “+NNNms” suffix will show you how much time was spent between calls. [screenshot showing millisecond deltas in debug output]
When stdout is not a TTY, Date#toUTCString() is used instead, making the output more useful for logging the debug information. [screenshot of non-TTY log output with timestamps]
If you’re using this in one or more of your libraries, you should use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you should prefix them with your library name and use “:” to separate features. For example “bodyParser” from Connect would then be “connect:bodyParser”.
The * character may be used as a wildcard. Suppose for example your library has debuggers named “connect:bodyParser”, “connect:compress”, “connect:session”, instead of listing all three with DEBUG=connect:bodyParser,connect:compress,connect:session, you may simply do DEBUG=connect:*, or to run everything using this module simply use DEBUG=*.
You can also exclude specific debuggers by prefixing them with a “-” character. For example, DEBUG=*,-connect:* would include all debuggers except those starting with “connect:”.
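The wildcard and exclusion rules above can be sketched as a small matcher. This is a simplified re-implementation for illustration, not debug's actual source: each comma- or space-delimited pattern becomes a regular expression (with * as a wildcard), and patterns prefixed with "-" form a skip list that is checked first.

```javascript
// Illustrative sketch of DEBUG-style namespace matching.
function makeMatcher (spec) {
  var skips = [] // patterns prefixed with "-"
  var names = [] // patterns to enable
  spec.split(/[\s,]+/).forEach(function (pattern) {
    if (!pattern) return
    var negated = pattern[0] === '-'
    var body = (negated ? pattern.slice(1) : pattern).replace(/\*/g, '.*?')
    var re = new RegExp('^' + body + '$')
    if (negated) skips.push(re)
    else names.push(re)
  })
  // returns true when `name` is enabled under `spec`
  return function enabled (name) {
    for (var i = 0; i < skips.length; i++) {
      if (skips[i].test(name)) return false
    }
    for (var j = 0; j < names.length; j++) {
      if (names[j].test(name)) return true
    }
    return false
  }
}

// makeMatcher('connect:*')('connect:bodyParser')   // => true
// makeMatcher('*,-connect:*')('connect:session')   // => false
```

Note that exclusions win over inclusions, which is why DEBUG=*,-connect:* enables everything except the connect namespaces.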
When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:
| Name | Purpose |
|---|---|
DEBUG |
Enables/disables specific debugging namespaces. |
DEBUG_COLORS |
Whether or not to use colors in the debug output. |
DEBUG_DEPTH |
Object inspection depth. |
DEBUG_SHOW_HIDDEN |
Shows hidden properties on inspected objects. |
Note: The environment variables beginning with DEBUG_ end up being converted into an Options object that gets used with %o/%O formatters. See the Node.js documentation for util.inspect() for the complete list.
Debug uses printf-style formatting. Below are the officially supported formatters:
| Formatter | Representation |
|---|---|
| %O | Pretty-print an Object on multiple lines. |
| %o | Pretty-print an Object all on a single line. |
| %s | String. |
| %d | Number (both integer and float). |
| %j | JSON. Replaced with the string ‘[Circular]’ if the argument contains circular references. |
| %% | Single percent sign (‘%’). This does not consume an argument. |
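The %j fallback described above can be sketched in a few lines: JSON.stringify throws on circular structures, so the formatter can simply catch that and substitute the placeholder string. This is an illustration of the documented behavior, not debug's exact source.

```javascript
// Sketch of the %j behavior: JSON.stringify, falling back to the
// string '[Circular]' when the value contains circular references.
function formatJson (v) {
  try {
    return JSON.stringify(v)
  } catch (err) {
    return '[Circular]'
  }
}

// formatJson({ x: 1 })  // => '{"x":1}'
```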
You can add custom formatters by extending the debug.formatters object. For example, if you wanted to add support for rendering a Buffer as hex with %h, you could do something like:
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
return v.toString('hex')
}
// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', new Buffer('hello world'))
// foo this is hex: 68656c6c6f20776f726c6421 +0ms
You can build a browser-ready script using browserify, or just use the browserify-as-a-service build, if you don’t want to build it yourself.
Debug’s enable state is currently persisted by localStorage. Consider the situation shown below where you have worker:a and worker:b, and wish to debug both. You can enable this using localStorage.debug = 'worker:a,worker:b' and then refreshing the page.
a = debug('worker:a');
b = debug('worker:b');
setInterval(function(){
a('doing some work');
}, 1000);
setInterval(function(){
b('doing some work');
}, 1200);
Colors are also enabled on “Web Inspectors” that understand the %c formatting option. These are WebKit web inspectors, Firefox (since version 31) and the Firebug plugin for Firefox (any version).
Colored output looks something like: [screenshot of colored debug output]
By default debug will log to stderr, however this can be configured per-namespace by overriding the log method:
Example stdout.js:
var debug = require('debug');
var error = debug('app:error');
// by default stderr is used
error('goes to stderr!');
var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');
// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now')
Become a sponsor and get your logo on our README on Github with a link to your site. [Become a sponsor]
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');Become a sponsor and get your logo on our README on Github with a link to your site. [Become a sponsor]
A tiny node.js debugging utility modelled after node core’s debugging technique.
Discussion around the V3 API is under way here
debug exposes a function; simply pass this function the name of your module, and it will return a decorated version of console.error for you to pass debug statements to. This will allow you to toggle the debug output for different parts of your module as well as the module as a whole.
Example app.js:
var debug = require('debug')('http')
, http = require('http')
, name = 'My App';
// fake app
debug('booting %s', name);
http.createServer(function(req, res){
debug(req.method + ' ' + req.url);
res.end('hello\n');
}).listen(3000, function(){
debug('listening');
});
// fake worker of some kind
require('./worker');

Example worker.js:
The DEBUG environment variable is then used to enable these based on space or comma-delimited names. Here are some examples:


On Windows the environment variable is set using the set command.
set DEBUG=*,-not_this
Note that PowerShell uses different syntax to set environment variables.
$env:DEBUG = "*,-not_this"
Then, run the program to be debugged as usual.
When actively developing an application it can be useful to see how much time is spent between one debug() call and the next. Suppose for example you invoke debug() before requesting a resource, and after as well; the “+NNNms” will show you how much time was spent between calls.

When stdout is not a TTY, Date#toUTCString() is used, making it more useful for logging the debug information as shown below:

If you’re using this in one or more of your libraries, you should use the name of your library so that developers may toggle debugging as desired without guessing names. If you have more than one debugger you should prefix them with your library name and use “:” to separate features. For example “bodyParser” from Connect would then be “connect:bodyParser”.
The * character may be used as a wildcard. Suppose for example your library has debuggers named “connect:bodyParser”, “connect:compress”, “connect:session”, instead of listing all three with DEBUG=connect:bodyParser,connect:compress,connect:session, you may simply do DEBUG=connect:*, or to run everything using this module simply use DEBUG=*.
You can also exclude specific debuggers by prefixing them with a “-” character. For example, DEBUG=*,-connect:* would include all debuggers except those starting with “connect:”.
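The pattern semantics described above (comma- or space-separated names, * as a wildcard, - as an exclusion prefix) can be sketched in plain JavaScript. `debugEnabled` is a hypothetical helper for illustration only, not the module’s actual implementation:

```javascript
// Sketch of DEBUG pattern matching: split the spec on commas/spaces,
// turn each pattern into a regex ('*' matches anything), and let any
// '-'-prefixed pattern veto the namespace.
function debugEnabled(spec, namespace) {
  var enabled = false;
  var patterns = spec.split(/[\s,]+/).filter(Boolean);
  for (var i = 0; i < patterns.length; i++) {
    var p = patterns[i];
    var negate = p[0] === '-';
    if (negate) p = p.slice(1);
    // Escape regex metacharacters, then expand '*' to '.*'.
    var re = new RegExp('^' + p.replace(/[.+?^${}()|[\]\\]/g, '\\$&')
                                .replace(/\*/g, '.*') + '$');
    if (re.test(namespace)) {
      if (negate) return false; // exclusions win
      enabled = true;
    }
  }
  return enabled;
}

debugEnabled('*,-connect:*', 'connect:bodyParser'); // false
debugEnabled('*,-connect:*', 'app:router');         // true
```

This is only meant to make the wildcard and exclusion rules concrete; the real module compiles the spec once when DEBUG is read rather than re-testing per call.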
When running through Node.js, you can set a few environment variables that will change the behavior of the debug logging:
| Name | Purpose |
|---|---|
| DEBUG | Enables/disables specific debugging namespaces. |
| DEBUG_COLORS | Whether or not to use colors in the debug output. |
| DEBUG_DEPTH | Object inspection depth. |
| DEBUG_SHOW_HIDDEN | Shows hidden properties on inspected objects. |
Note: The environment variables beginning with DEBUG_ end up being converted into an Options object that gets used with %o/%O formatters. See the Node.js documentation for util.inspect() for the complete list.
Debug uses printf-style formatting. Below are the officially supported formatters:
| Formatter | Representation |
|---|---|
| %O | Pretty-print an Object on multiple lines. |
| %o | Pretty-print an Object all on a single line. |
| %s | String. |
| %d | Number (both integer and float). |
| %j | JSON. Replaced with the string ‘[Circular]’ if the argument contains circular references. |
| %% | Single percent sign (‘%’). This does not consume an argument. |
You can add custom formatters by extending the debug.formatters object. For example, if you wanted to add support for rendering a Buffer as hex with %h, you could do something like:
const createDebug = require('debug')
createDebug.formatters.h = (v) => {
return v.toString('hex')
}
// …elsewhere
const debug = createDebug('foo')
debug('this is hex: %h', new Buffer('hello world'))
// foo this is hex: 68656c6c6f20776f726c64 +0ms

You can build a browser-ready script using browserify, or just use the browserify-as-a-service build, if you don’t want to build it yourself.
Debug’s enable state is currently persisted by localStorage. Consider the situation shown below where you have worker:a and worker:b, and wish to debug both. You can enable this using localStorage.debug:
And then refresh the page.
a = debug('worker:a');
b = debug('worker:b');
setInterval(function(){
a('doing some work');
}, 1000);
setInterval(function(){
b('doing some work');
}, 1200);

Colors are also enabled on “Web Inspectors” that understand the %c formatting option. These are WebKit web inspectors, Firefox (since version 31) and the Firebug plugin for Firefox (any version).
Colored output looks something like:

By default debug will log to stderr, however this can be configured per-namespace by overriding the log method:
Example stdout.js:
var debug = require('debug');
var error = debug('app:error');
// by default stderr is used
error('goes to stderr!');
var log = debug('app:log');
// set this namespace to log via console.log
log.log = console.log.bind(console); // don't forget to bind to console!
log('goes to stdout');
error('still goes to stderr!');
// set all output to go via console.info
// overrides all per-namespace log settings
debug.log = console.info.bind(console);
error('now goes to stdout via console.info');
log('still goes to stdout, but via console.info now');

Become a sponsor and get your logo on our README on GitHub with a link to your site. [Become a sponsor]
A light, featureful and explicit option parsing library for node.js.
Why another one? See below. tl;dr: The others I’ve tried are either too loosey-goosey (not explicit), too big/too many deps, or ill-specified. YMMV.
Follow @trentmick for updates to node-dashdash.
npm install dashdash
var dashdash = require('dashdash');
// Specify the options. Minimally `name` (or `names`) and `type`
// must be given for each.
var options = [
{
// `names` or a single `name`. First element is the `opts.KEY`.
names: ['help', 'h'],
// See "Option specs" below for types.
type: 'bool',
help: 'Print this help and exit.'
}
];
// Shortcut form. As called it infers `process.argv`. See below for
// the longer form to use methods like `.help()` on the Parser object.
var opts = dashdash.parse({options: options});
console.log("opts:", opts);
console.log("args:", opts._args);

A more realistic starter script “foo.js” is as follows. This also shows using parser.help() for formatted option help.
var dashdash = require('./lib/dashdash');
var options = [
{
name: 'version',
type: 'bool',
help: 'Print tool version and exit.'
},
{
names: ['help', 'h'],
type: 'bool',
help: 'Print this help and exit.'
},
{
names: ['verbose', 'v'],
type: 'arrayOfBool',
help: 'Verbose output. Use multiple times for more verbose.'
},
{
names: ['file', 'f'],
type: 'string',
help: 'File to process',
helpArg: 'FILE'
}
];
var parser = dashdash.createParser({options: options});
try {
var opts = parser.parse(process.argv);
} catch (e) {
console.error('foo: error: %s', e.message);
process.exit(1);
}
console.log("# opts:", opts);
console.log("# args:", opts._args);
// Use `parser.help()` for formatted options help.
if (opts.help) {
var help = parser.help({includeEnv: true}).trimRight();
console.log('usage: node foo.js [OPTIONS]\n'
+ 'options:\n'
+ help);
process.exit(0);
}
// ...

Some example output from this script (foo.js):
$ node foo.js -h
# opts: { help: true,
_order: [ { name: 'help', value: true, from: 'argv' } ],
_args: [] }
# args: []
usage: node foo.js [OPTIONS]
options:
--version Print tool version and exit.
-h, --help Print this help and exit.
-v, --verbose Verbose output. Use multiple times for more verbose.
-f FILE, --file=FILE File to process
$ node foo.js -v
# opts: { verbose: [ true ],
_order: [ { name: 'verbose', value: true, from: 'argv' } ],
_args: [] }
# args: []
$ node foo.js --version arg1
# opts: { version: true,
_order: [ { name: 'version', value: true, from: 'argv' } ],
_args: [ 'arg1' ] }
# args: [ 'arg1' ]
$ node foo.js -f bar.txt
# opts: { file: 'bar.txt',
_order: [ { name: 'file', value: 'bar.txt', from: 'argv' } ],
_args: [] }
# args: []
$ node foo.js -vvv --file=blah
# opts: { verbose: [ true, true, true ],
file: 'blah',
_order:
[ { name: 'verbose', value: true, from: 'argv' },
{ name: 'verbose', value: true, from: 'argv' },
{ name: 'verbose', value: true, from: 'argv' },
{ name: 'file', value: 'blah', from: 'argv' } ],
_args: [] }
# args: []
See the “examples” dir for a number of starter examples using some of dashdash’s features.
If you want to allow environment variables to specify options to your tool, dashdash makes this easy. We can change the ‘verbose’ option in the example above to include an ‘env’ field:
{
names: ['verbose', 'v'],
type: 'arrayOfBool',
env: 'FOO_VERBOSE', // <--- add this line
help: 'Verbose output. Use multiple times for more verbose.'
},

Then the "FOO_VERBOSE" environment variable can be used to set this option:
$ FOO_VERBOSE=1 node foo.js
# opts: { verbose: [ true ],
_order: [ { name: 'verbose', value: true, from: 'env' } ],
_args: [] }
# args: []
Boolean options will interpret the empty string as unset, ‘0’ as false and anything else as true.
$ FOO_VERBOSE= node examples/foo.js # not set
# opts: { _order: [], _args: [] }
# args: []
$ FOO_VERBOSE=0 node examples/foo.js # '0' is false
# opts: { verbose: [ false ],
_order: [ { key: 'verbose', value: false, from: 'env' } ],
_args: [] }
# args: []
$ FOO_VERBOSE=1 node examples/foo.js # true
# opts: { verbose: [ true ],
_order: [ { key: 'verbose', value: true, from: 'env' } ],
_args: [] }
# args: []
$ FOO_VERBOSE=boogabooga node examples/foo.js # true
# opts: { verbose: [ true ],
_order: [ { key: 'verbose', value: true, from: 'env' } ],
_args: [] }
# args: []
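The boolean interpretation shown above can be sketched as a tiny helper (`boolFromEnv` is a hypothetical name, not dashdash’s internals): the empty string means unset, ‘0’ means false, anything else means true.

```javascript
// Hypothetical sketch of boolean env var handling described above:
// empty string => unset (undefined), '0' => false, anything else => true.
function boolFromEnv(value) {
  if (value === undefined || value === '') return undefined; // not set
  return value !== '0';
}

boolFromEnv('');           // undefined (treated as unset)
boolFromEnv('0');          // false
boolFromEnv('1');          // true
boolFromEnv('boogabooga'); // true
```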
Non-booleans can be used as well. Strings:
$ FOO_FILE=data.txt node examples/foo.js
# opts: { file: 'data.txt',
_order: [ { key: 'file', value: 'data.txt', from: 'env' } ],
_args: [] }
# args: []
Numbers:
$ FOO_TIMEOUT=5000 node examples/foo.js
# opts: { timeout: 5000,
_order: [ { key: 'timeout', value: 5000, from: 'env' } ],
_args: [] }
# args: []
$ FOO_TIMEOUT=blarg node examples/foo.js
foo: error: arg for "FOO_TIMEOUT" is not a positive integer: "blarg"
With the includeEnv: true config to parser.help() the environment variable can also be included in help output:
usage: node foo.js [OPTIONS]
options:
    --version             Print tool version and exit.
    -h, --help            Print this help and exit.
    -v, --verbose         Verbose output. Use multiple times for more
                          verbose. Environment: FOO_VERBOSE=1
    -f FILE, --file=FILE  File to process
Dashdash provides a simple way to create a Bash completion file that you can place in your “bash_completion.d” directory (sometimes that is “/usr/local/etc/bash_completion.d/”).
Dashdash will return bash completion file content given a parser instance:
var parser = dashdash.createParser({options: options});
console.log(parser.bashCompletion({name: 'mycli'}));
or directly from an options array of option specs:
var code = dashdash.bashCompletionFromOptions({
    name: 'mycli',
    options: OPTIONS
});
Write that content to “/usr/local/etc/bash_completion.d/mycli” and you will have Bash completions for mycli. Alternatively you can write it to any file (e.g. “~/.bashrc”) and source it.
You could add a hidden --completion option to your tool that emits the completion content, and document that your users can run it to install Bash completions.
See examples/ddcompletion.js for a complete example, including how one can define bash functions for completion of custom option types. Also see node-cmdln for how it uses this for Bash completion for full multi-subcommand tools.
Parser construction (i.e. dashdash.createParser(CONFIG)) takes the following fields:
options (Array of option specs). Required. See the Option specs section below.
interspersed (Boolean). Optional. Default is true. If true this allows interspersed arguments and options. I.e.:
node ./tool.js -v arg1 arg2 -h # '-h' is after interspersed args
Set it to false to have ‘-h’ not get parsed as an option in the above example.
allowUnknown (Boolean). Optional. Default is false. If false, this causes unknown arguments to throw an error. I.e.:
node ./tool.js -v arg1 --afe8asefksjefhas
Set it to true to treat the unknown option as a positional argument.
Caveat: When a shortopt group, such as -xaz contains a mix of known and unknown options, the entire group is passed through unmolested as a positional argument.
Consider if you have a known short option -a, and parse the following command line:
node ./tool.js -xaz
where -x and -z are unknown. There are multiple ways to interpret this:
- -x takes a value: {x: 'az'}
- -x and -z are both booleans: {x: true, a: true, z: true}

Since dashdash does not know what -x and -z are, it can’t know if you’d prefer to receive {a: true, _args: ['-x', '-z']} or {x: 'az'} or {_args: ['-xaz']}. Leaving the positional arg unprocessed is the easiest mistake for the user to recover from.
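Assuming allowUnknown: true and boolean-only short options, the pass-through decision can be sketched as follows; `expandShortGroup` is a hypothetical helper, not dashdash’s actual parser:

```javascript
// Sketch: expand a short-option group like '-xaz' only if every letter
// is a known option; otherwise leave the whole group as a positional
// argument, which is the easiest mistake for the user to recover from.
function expandShortGroup(arg, knownShortOpts) {
  var letters = arg.slice(1).split('');
  var allKnown = letters.every(function (ch) {
    return knownShortOpts.indexOf(ch) !== -1;
  });
  if (!allKnown) {
    return { args: [arg] };  // pass through unmolested
  }
  return { opts: letters.map(function (ch) { return '-' + ch; }) };
}

expandShortGroup('-xaz', ['a']);      // { args: ['-xaz'] }
expandShortGroup('-va', ['v', 'a']);  // { opts: ['-v', '-a'] }
```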
Example using all fields (required fields are noted):
{
names: ['file', 'f'], // Required (one of `names` or `name`).
type: 'string', // Required.
completionType: 'filename',
env: 'MYTOOL_FILE',
help: 'Config file to load before running "mytool"',
helpArg: 'PATH',
helpWrap: false,
default: path.resolve(process.env.HOME, '.mytoolrc')
}

Each option spec in the options array must/can have the following fields:
name (String) or names (Array). Required. These give the option name and aliases. The first name (if more than one given) is the key for the parsed opts object.
type (String). Required. One of:
type (String). Required. The option type. The specs in this document use 'bool', 'string', and 'arrayOfBool'; a 'date' type is also supported (YYYY-MM-DD[THH:MM:SS[.sss][Z]], e.g. “2014-03-28T18:35:01.489Z”). FWIW, these names attempt to match with asserts on assert-plus. You can add your own custom option types with dashdash.addOptionType. See below.
completionType (String). Optional. This is used for Bash completion for an option argument. If not specified, then the value of type is used. Any string may be specified, but only the following values have meaning:
- none: Provide no completions.
- file: Bash’s default completion (i.e. complete -o default), which includes filenames.
- Any other value: completion is delegated to a complete_FOO Bash function, which must be defined. This is for custom completions for a given tool. Typically these custom functions are provided in the specExtra argument to dashdash.bashCompletionFromOptions(). See “examples/ddcompletion.js” for an example.

env (String or Array of String). Optional. An environment variable name (or names) that can be used as a fallback for this option. For example, given a “foo.js” like this:
var options = [{names: ['dry-run', 'n'], env: 'FOO_DRY_RUN'}];
var opts = dashdash.parse({options: options});
Both node foo.js --dry-run and FOO_DRY_RUN=1 node foo.js would result in opts.dry_run = true.
An environment variable is only used as a fallback, i.e. it is ignored if the associated option is given in argv.
help (String). Optional. Used for parser.help() output.
helpArg (String). Optional. Used in help output as the placeholder for the option argument, e.g. the “PATH” in:
...
-f PATH, --file=PATH File to process
...

helpWrap (Boolean). Optional, default true. Set this to false to have that option’s help not be text wrapped in <parser>.help() output.
default. Optional. A default value used for this option, if the option isn’t specified in argv.
hidden (Boolean). Optional, default false. If true, help output will not include this option. See also the includeHidden option to bashCompletionFromOptions() for Bash completion.
You can add headings between option specs in the options array. To do so, simply add an object with only a group property – the string to print as the heading for the subsequent options in the array. For example:
var options = [
{
group: 'Armament Options'
},
{
names: [ 'weapon', 'w' ],
type: 'string'
},
{
group: 'General Options'
},
{
names: [ 'help', 'h' ],
type: 'bool'
}
];
...

Note: You can use an empty string, {group: ''}, to get a blank line in help output between groups of options.
The parser.help(...) function is configurable as follows:
  Options:
      Armament Options:
  ^^  -w WEAPON, --weapon=WEAPON  Weapon with which to crush. One of:  |
  |                               sword, spear, maul                   |
  |   General Options:                                                 |
  |       -h, --help              Print this help and exit.            |
  |   ^^^^                        ^                                    |
  |   `-- indent                  `-- helpCol                maxCol ---'
  `-- headingIndent
- indent (Number or String). Default 4. Set to a number (for that many spaces) or a string for the literal indent.
- headingIndent (Number or String). Default half the length of indent. Set to a number (for that many spaces) or a string for the literal indent. This indent applies to group heading lines, between normal option lines.
- nameSort (String). Default is 'length'. By default the names are sorted to put the short opts first (i.e. '-h, --help' preferred to '--help, -h'). Set to 'none' to not do this sorting.
- maxCol (Number). Default 80. Note that reflow is just done on whitespace, so a long token in the option help can overflow maxCol.
- helpCol (Number). If not set, a reasonable value will be determined between minHelpCol and maxHelpCol.
- minHelpCol (Number). Default 20.
- maxHelpCol (Number). Default 40.
- helpWrap (Boolean). Default true. Set to false to have option help strings not be text wrapped to the helpCol..maxCol range.
- includeEnv (Boolean). Default false. If the option has associated environment variables (via the env option spec attribute), then mention of those envvars is appended to the help string.
- includeDefault (Boolean). Default false. If the option has a default value (via the default option spec attribute, or a default on the option’s type), then a “Default: VALUE” string will be appended to the help string.

Dashdash includes a good starter set of option types that it will parse for you. However, you can add your own via:
var dashdash = require('dashdash');
dashdash.addOptionType({
    name: '...',
    takesArg: true,
    helpArg: '...',
    parseArg: function (option, optstr, arg) { ... },
    array: false,        // optional
    arrayFlatten: false, // optional
    default: ...,        // optional
    completionType: ...  // optional
});
For example, a simple option type that accepts ‘yes’, ‘y’, ‘no’ or ‘n’ as a boolean argument would look like:
var dashdash = require('dashdash');
var format = require('util').format;

function parseYesNo(option, optstr, arg) {
    var argLower = arg.toLowerCase();
    if (~['yes', 'y'].indexOf(argLower)) {
        return true;
    } else if (~['no', 'n'].indexOf(argLower)) {
        return false;
    } else {
        throw new Error(format(
            'arg for "%s" is not "yes" or "no": "%s"', optstr, arg));
    }
}

dashdash.addOptionType({
    name: 'yesno',
    takesArg: true,
    helpArg: '<yes|no>',
    parseArg: parseYesNo
});

var options = [
    {names: ['answer', 'a'], type: 'yesno'}
];
var opts = dashdash.parse({options: options});
See “examples/custom-option-*.js” for other examples. See the addOptionType block comment in “lib/dashdash.js” for more details. Please let me know with an issue if you write a generally useful one.
Why another node.js option parsing lib?
nopt really is just for “tools like npm”. Implicit opts (e.g. ‘–no-foo’ works for every ‘–foo’). Can’t disable abbreviated opts. Can’t do multiple usages of same opt, e.g. ‘-vvv’ (I think). Can’t do grouped short opts.
optimist has surprising interpretation of options (at least to me). Implicit opts mean ambiguities and poor error handling for fat-fingering. process.exit calls make it hard to use as a library.
optparse: Incomplete docs. Is this an attempted clone of Python’s optparse? Not clear. Some divergence. The parser.on("name", ...) API is weird.
argparse: Dep on underscore. No thanks, just for option processing. find lib | wc -l -> 26. Overkill. Argparse is a bit different anyway; not sure I want that.
posix-getopt No type validation. Though that isn’t a killer. AFAIK can’t have a long opt without a short alias. I.e. no getopt_long semantics. Also, no whizbang features like generated help output.
“commander.js”: I wrote a critique a while back. It seems fine, but last I checked had an outstanding bug that would prevent me from using it.
A querystring parsing and stringifying library with some added security.
Lead Maintainer: Jordan Harband
The qs module was originally created and maintained by TJ Holowaychuk.
var qs = require('qs');
var assert = require('assert');
var obj = qs.parse('a=c');
assert.deepEqual(obj, { a: 'c' });
var str = qs.stringify(obj);
assert.equal(str, 'a=c');

qs allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets []. For example, the string 'foo[bar]=baz' converts to:
When using the plainObjects option the parsed value is returned as a null object, created via Object.create(null). As such, you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like:
var nullObject = qs.parse('a[hasOwnProperty]=b', { plainObjects: true });
assert.deepEqual(nullObject, { a: { hasOwnProperty: 'b' } });

By default, parameters that would overwrite properties on the object prototype are ignored. If you wish to keep the data from those fields, either use plainObjects as mentioned above, or set allowPrototypes to true, which will allow user input to overwrite those properties. WARNING: It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.
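The default protection amounts to skipping any key that already exists on Object.prototype before assignment. A minimal sketch, where `safeAssign` is a hypothetical helper and not qs’s actual implementation:

```javascript
// Sketch of the default protection: drop any key that already exists
// on Object.prototype before assigning it to the parsed result.
function safeAssign(target, key, value) {
  if (key in Object.prototype) {
    return target; // ignore e.g. 'hasOwnProperty', 'toString'
  }
  target[key] = value;
  return target;
}

var obj = {};
safeAssign(obj, 'a', 'b');              // obj is now { a: 'b' }
safeAssign(obj, 'hasOwnProperty', 'x'); // ignored: key is on Object.prototype
```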
var protoObject = qs.parse('a[hasOwnProperty]=b', { allowPrototypes: true });
assert.deepEqual(protoObject, { a: { hasOwnProperty: 'b' } });

URI encoded strings work too:
You can also nest your objects, like 'foo[bar][baz]=foobarbaz':
By default, when nesting objects qs will only parse up to 5 children deep. This means if you attempt to parse a string like 'a[b][c][d][e][f][g][h][i]=j' your resulting object will be:
var expected = {
a: {
b: {
c: {
d: {
e: {
f: {
'[g][h][i]': 'j'
}
}
}
}
}
}
};
var string = 'a[b][c][d][e][f][g][h][i]=j';
assert.deepEqual(qs.parse(string), expected);

This depth can be overridden by passing a depth option to qs.parse(string, [options]):
var deep = qs.parse('a[b][c][d][e][f][g][h][i]=j', { depth: 1 });
assert.deepEqual(deep, { a: { b: { '[c][d][e][f][g][h][i]': 'j' } } });

The depth limit helps mitigate abuse when qs is used to parse user input, and it is recommended to keep it a reasonably small number.
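The mechanism behind the depth option can be sketched as a depth-limited key splitter: after depth bracket segments, whatever remains is kept as one literal key. `splitKey` is a hypothetical helper for illustration, not qs’s internals:

```javascript
// Split a bracket key like 'a[b][c]' into segments, stopping after
// `depth` nested segments; the leftover is kept as one literal key.
function splitKey(key, depth) {
  var segments = [];
  var m = /^([^[\]]+)/.exec(key);
  if (!m) return [key];
  segments.push(m[1]);
  var rest = key.slice(m[1].length);
  var bracket = /^\[([^[\]]*)\]/;
  while (depth-- > 0 && (m = bracket.exec(rest))) {
    segments.push(m[1]);
    rest = rest.slice(m[0].length);
  }
  if (rest) segments.push(rest); // leftover kept verbatim, e.g. '[g][h][i]'
  return segments;
}

splitKey('a[b][c][d][e][f][g][h][i]', 5);
// => ['a', 'b', 'c', 'd', 'e', 'f', '[g][h][i]']
```

This matches the shapes of the parsed objects shown above: five nested children, then a single '[g][h][i]' key holding the rest.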
For similar reasons, by default qs will only parse up to 1000 parameters. This can be overridden by passing a parameterLimit option:
To bypass the leading question mark, use ignoreQueryPrefix:
var prefixed = qs.parse('?a=b&c=d', { ignoreQueryPrefix: true });
assert.deepEqual(prefixed, { a: 'b', c: 'd' });

An optional delimiter can also be passed:
var delimited = qs.parse('a=b;c=d', { delimiter: ';' });
assert.deepEqual(delimited, { a: 'b', c: 'd' });

Delimiters can be a regular expression too:
var regexed = qs.parse('a=b;c=d,e=f', { delimiter: /[;,]/ });
assert.deepEqual(regexed, { a: 'b', c: 'd', e: 'f' });

Option allowDots can be used to enable dot notation:
var withDots = qs.parse('a.b=c', { allowDots: true });
assert.deepEqual(withDots, { a: { b: 'c' } });

If you have to deal with legacy browsers or services, there’s also support for decoding percent-encoded octets as iso-8859-1:
var oldCharset = qs.parse('a=%A7', { charset: 'iso-8859-1' });
assert.deepEqual(oldCharset, { a: '§' });

Some services add an initial utf8=✓ value to forms so that old Internet Explorer versions are more likely to submit the form as utf-8. Additionally, the server can check the value against wrong encodings of the checkmark character and detect that a query string or application/x-www-form-urlencoded body was not sent as utf-8, e.g. if the form had an accept-charset parameter or the containing page had a different character set.
qs supports this mechanism via the charsetSentinel option. If specified, the utf8 parameter will be omitted from the returned object. It will be used to switch to iso-8859-1/utf-8 mode depending on how the checkmark is encoded.
Important: When you specify both the charset option and the charsetSentinel option, the charset will be overridden when the request contains a utf8 parameter from which the actual charset can be deduced. In that sense the charset will behave as the default charset rather than the authoritative charset.
var detectedAsUtf8 = qs.parse('utf8=%E2%9C%93&a=%C3%B8', {
charset: 'iso-8859-1',
charsetSentinel: true
});
assert.deepEqual(detectedAsUtf8, { a: 'ø' });
// Browsers encode the checkmark as ✓ when submitting as iso-8859-1:
var detectedAsIso8859_1 = qs.parse('utf8=%26%2310003%3B&a=%F8', {
charset: 'utf-8',
charsetSentinel: true
});
assert.deepEqual(detectedAsIso8859_1, { a: 'ø' });

If you want to decode the &#...; syntax to the actual character, you can specify the interpretNumericEntities option as well:
var detectedAsIso8859_1 = qs.parse('a=%26%239786%3B', {
charset: 'iso-8859-1',
interpretNumericEntities: true
});
assert.deepEqual(detectedAsIso8859_1, { a: '☺' });

It also works when the charset has been detected in charsetSentinel mode.
qs can also parse arrays using a similar [] notation:
You may specify an index as well:
Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, qs will compact a sparse array to only the existing values, preserving their order:
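The compaction step can be sketched as follows (`compactSparse` is a hypothetical helper, not qs’s internals):

```javascript
// Compact a sparse parsed "array" (an object keyed by indices) into a
// dense array, keeping only existing values in index order.
function compactSparse(obj) {
  return Object.keys(obj)
    .map(Number)
    .sort(function (a, b) { return a - b; })
    .map(function (i) { return obj[i]; });
}

compactSparse({ 1: 'b', 15: 'c' }); // ['b', 'c']
```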
Note that an empty string is also a value, and will be preserved:
var withEmptyString = qs.parse('a[]=&a[]=b');
assert.deepEqual(withEmptyString, { a: ['', 'b'] });
var withIndexedEmptyString = qs.parse('a[0]=b&a[1]=&a[2]=c');
assert.deepEqual(withIndexedEmptyString, { a: ['b', '', 'c'] });

qs will also limit specifying indices in an array to a maximum index of 20. Any array members with an index greater than 20 will instead be converted to an object with the index as the key. This is needed to handle cases when someone sends, for example, a[999999999], which would otherwise take significant time to iterate over as a huge array.
This limit can be overridden by passing an arrayLimit option:
var withArrayLimit = qs.parse('a[1]=b', { arrayLimit: 0 });
assert.deepEqual(withArrayLimit, { a: { '1': 'b' } });

To disable array parsing entirely, set parseArrays to false.
var noParsingArrays = qs.parse('a[]=b', { parseArrays: false });
assert.deepEqual(noParsingArrays, { a: { '0': 'b' } });

If you mix notations, qs will merge the two items into an object:
var mixedNotation = qs.parse('a[0]=b&a[b]=c');
assert.deepEqual(mixedNotation, { a: { '0': 'b', b: 'c' } });

You can also create arrays of objects:
Some people use commas to join arrays; qs can parse them:
var arraysOfObjects = qs.parse('a=b,c', { comma: true })
assert.deepEqual(arraysOfObjects, { a: ['b', 'c'] })

(this cannot convert nested objects, such as a={b:1},{c:d})
When stringifying, qs by default URI encodes output. Objects are stringified as you would expect:
assert.equal(qs.stringify({ a: 'b' }), 'a=b');
assert.equal(qs.stringify({ a: { b: 'c' } }), 'a%5Bb%5D=c');

This encoding can be disabled by setting the encode option to false:
var unencoded = qs.stringify({ a: { b: 'c' } }, { encode: false });
assert.equal(unencoded, 'a[b]=c');

Encoding can be disabled for keys by setting the encodeValuesOnly option to true:
var encodedValues = qs.stringify(
{ a: 'b', c: ['d', 'e=f'], f: [['g'], ['h']] },
{ encodeValuesOnly: true }
);
assert.equal(encodedValues,'a=b&c[0]=d&c[1]=e%3Df&f[0][0]=g&f[1][0]=h');

This encoding can also be replaced by a custom encoding method set as encoder option:
var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str) {
// Passed in values `a`, `b`, `c`
return // Return encoded string
}})

(Note: the encoder option does not apply if encode is false.)
Analogue to the encoder there is a decoder option for parse to override decoding of properties and values:
var decoded = qs.parse('x=z', { decoder: function (str) {
// Passed in values `x`, `z`
return // Return decoded string
}})

Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases will be URI encoded during real usage.
When arrays are stringified, by default they are given explicit indices:
You may override this by setting the indices option to false:
You may use the arrayFormat option to specify the format of the output array:
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'indices' })
// 'a[0]=b&a[1]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'brackets' })
// 'a[]=b&a[]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'repeat' })
// 'a=b&a=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'comma' })
// 'a=b,c'

When objects are stringified, by default they use bracket notation:
You may override this to use dot notation by setting the allowDots option to true:
Empty strings and null values will omit the value, but the equals sign (=) remains in place:
Key with no values (such as an empty object or array) will return nothing:
assert.equal(qs.stringify({ a: [] }), '');
assert.equal(qs.stringify({ a: {} }), '');
assert.equal(qs.stringify({ a: [{}] }), '');
assert.equal(qs.stringify({ a: { b: []} }), '');
assert.equal(qs.stringify({ a: { b: {}} }), '');

Properties that are set to undefined will be omitted entirely:
The query string may optionally be prepended with a question mark:
The delimiter may be overridden with stringify as well:
If you only want to override the serialization of Date objects, you can provide a serializeDate option:
var date = new Date(7);
assert.equal(qs.stringify({ a: date }), 'a=1970-01-01T00:00:00.007Z'.replace(/:/g, '%3A'));
assert.equal(
qs.stringify({ a: date }, { serializeDate: function (d) { return d.getTime(); } }),
'a=7'
);

You may use the sort option to affect the order of parameter keys:
function alphabeticalSort(a, b) {
return a.localeCompare(b);
}
assert.equal(qs.stringify({ a: 'c', z: 'y', b : 'f' }, { sort: alphabeticalSort }), 'a=c&b=f&z=y');Finally, you can use the filter option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification:
function filterFunc(prefix, value) {
if (prefix == 'b') {
// Return an `undefined` value to omit a property.
return;
}
if (prefix == 'e[f]') {
return value.getTime();
}
if (prefix == 'e[g][0]') {
return value * 2;
}
return value;
}
qs.stringify({ a: 'b', c: 'd', e: { f: new Date(123), g: [2] } }, { filter: filterFunc });
// 'a=b&c=d&e[f]=123&e[g][0]=4'
qs.stringify({ a: 'b', c: 'd', e: 'f' }, { filter: ['a', 'e'] });
// 'a=b&e=f'
qs.stringify({ a: ['b', 'c', 'd'], e: 'f' }, { filter: ['a', 0, 2] });
// 'a[0]=b&a[2]=d'null valuesBy default, null values are treated like empty strings:
Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings.
To distinguish between null values and empty strings use the strictNullHandling flag. In the result string the null values have no = sign:
var strictNull = qs.stringify({ a: null, b: '' }, { strictNullHandling: true });
assert.equal(strictNull, 'a&b=');To parse values without = back to null use the strictNullHandling flag:
var parsedStrictNull = qs.parse('a&b=', { strictNullHandling: true });
assert.deepEqual(parsedStrictNull, { a: null, b: '' });To completely skip rendering keys with null values, use the skipNulls flag:
var nullsSkipped = qs.stringify({ a: 'b', c: null}, { skipNulls: true });
assert.equal(nullsSkipped, 'a=b');If you’re communicating with legacy systems, you can switch to iso-8859-1 using the charset option:
Characters that don’t exist in iso-8859-1 will be converted to numeric entities, similar to what browsers do:
var numeric = qs.stringify({ a: '☺' }, { charset: 'iso-8859-1' });
assert.equal(numeric, 'a=%26%239786%3B');You can use the charsetSentinel option to announce the character by including an utf8=✓ parameter with the proper encoding if the checkmark, similar to what Ruby on Rails and others do when submitting forms.
var sentinel = qs.stringify({ a: '☺' }, { charsetSentinel: true });
assert.equal(sentinel, 'utf8=%E2%9C%93&a=%E2%98%BA');
var isoSentinel = qs.stringify({ a: 'æ' }, { charsetSentinel: true, charset: 'iso-8859-1' });
assert.equal(isoSentinel, 'utf8=%26%2310003%3B&a=%E6');By default the encoding and decoding of characters is done in utf-8, and iso-8859-1 support is also built in via the charset parameter.
If you wish to encode querystrings to a different character set (i.e. Shift JIS) you can use the qs-iconv library:
var encoder = require('qs-iconv/encoder')('shift_jis');
var shiftJISEncoded = qs.stringify({ a: 'こんにちは!' }, { encoder: encoder });
assert.equal(shiftJISEncoded, 'a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I');This also works for decoding of query strings:
var decoder = require('qs-iconv/decoder')('shift_jis');
var obj = qs.parse('a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I', { decoder: decoder });
assert.deepEqual(obj, { a: 'こんにちは!' });RFC3986 used as default option and encodes ’ ’ to %20 which is backward compatible. In the same time, output can be stringified as per RFC1738 with ’ ’ equal to ‘+’.
assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');
A querystring parsing and stringifying library with some added security.
Lead Maintainer: Jordan Harband
The qs module was originally created and maintained by TJ Holowaychuk.
var qs = require('qs');
var assert = require('assert');
var obj = qs.parse('a=c');
assert.deepEqual(obj, { a: 'c' });
var str = qs.stringify(obj);
assert.equal(str, 'a=c');
qs allows you to create nested objects within your query strings, by surrounding the name of sub-keys with square brackets []. For example, the string 'foo[bar]=baz' converts to:
When using the plainObjects option the parsed value is returned as a null object, created via Object.create(null) and as such you should be aware that prototype methods will not exist on it and a user may set those names to whatever value they like:
var nullObject = qs.parse('a[hasOwnProperty]=b', { plainObjects: true });
assert.deepEqual(nullObject, { a: { hasOwnProperty: 'b' } });
By default, parameters that would overwrite properties on the object prototype are ignored. If you wish to keep the data from those fields, either use plainObjects as mentioned above, or set allowPrototypes to true, which will allow user input to overwrite those properties. WARNING: It is generally a bad idea to enable this option as it can cause problems when attempting to use the properties that have been overwritten. Always be careful with this option.
var protoObject = qs.parse('a[hasOwnProperty]=b', { allowPrototypes: true });
assert.deepEqual(protoObject, { a: { hasOwnProperty: 'b' } });
URI encoded strings work too:
You can also nest your objects, like 'foo[bar][baz]=foobarbaz':
By default, when nesting objects qs will only parse up to 5 children deep. This means if you attempt to parse a string like 'a[b][c][d][e][f][g][h][i]=j' your resulting object will be:
var expected = {
a: {
b: {
c: {
d: {
e: {
f: {
'[g][h][i]': 'j'
}
}
}
}
}
}
};
var string = 'a[b][c][d][e][f][g][h][i]=j';
assert.deepEqual(qs.parse(string), expected);
This depth can be overridden by passing a depth option to qs.parse(string, [options]):
var deep = qs.parse('a[b][c][d][e][f][g][h][i]=j', { depth: 1 });
assert.deepEqual(deep, { a: { b: { '[c][d][e][f][g][h][i]': 'j' } } });
The depth limit helps mitigate abuse when qs is used to parse user input, and it is recommended to keep it a reasonably small number.
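The depth-limited key splitting can be sketched in plain JavaScript. This is a minimal illustration, not qs's actual implementation, and splitKey is a hypothetical helper name:

```javascript
// Sketch (not qs's code): split a bracket-notation key into segments,
// keeping everything past the depth limit as one literal remainder segment.
function splitKey(key, depth) {
  var bracketStart = key.indexOf('[');
  var parent = bracketStart === -1 ? key : key.slice(0, bracketStart);
  var rest = key.slice(parent.length);
  var groups = rest.match(/\[[^[\]]*\]/g) || [];
  var kept = groups.slice(0, depth).map(function (g) { return g.slice(1, -1); });
  var remainder = groups.slice(depth).join('');
  return remainder ? [parent].concat(kept, remainder) : [parent].concat(kept);
}

splitKey('a[b][c][d][e][f][g][h][i]', 5);
// → ['a', 'b', 'c', 'd', 'e', 'f', '[g][h][i]']
```

With the default depth of 5, this mirrors the expected object shown above: five nested keys are parsed, and the rest of the brackets collapse into one literal key.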
For similar reasons, by default qs will only parse up to 1000 parameters. This can be overridden by passing a parameterLimit option:
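The effect of the parameter limit can be sketched as follows. This is an illustration, not qs's full parser (which also applies the delimiter option), and limitParameters is a hypothetical helper name:

```javascript
// Sketch: only the first N '&'-separated pairs are considered;
// String.prototype.split's limit argument discards the rest.
function limitParameters(query, parameterLimit) {
  return query.split('&', parameterLimit);
}

limitParameters('a=1&b=2&c=3', 2);
// → ['a=1', 'b=2']
```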
To bypass the leading question mark, use ignoreQueryPrefix:
var prefixed = qs.parse('?a=b&c=d', { ignoreQueryPrefix: true });
assert.deepEqual(prefixed, { a: 'b', c: 'd' });
An optional delimiter can also be passed:
var delimited = qs.parse('a=b;c=d', { delimiter: ';' });
assert.deepEqual(delimited, { a: 'b', c: 'd' });
Delimiters can be a regular expression too:
var regexed = qs.parse('a=b;c=d,e=f', { delimiter: /[;,]/ });
assert.deepEqual(regexed, { a: 'b', c: 'd', e: 'f' });
Option allowDots can be used to enable dot notation:
var withDots = qs.parse('a.b=c', { allowDots: true });
assert.deepEqual(withDots, { a: { b: 'c' } });
If you have to deal with legacy browsers or services, there's also support for decoding percent-encoded octets as iso-8859-1:
var oldCharset = qs.parse('a=%A7', { charset: 'iso-8859-1' });
assert.deepEqual(oldCharset, { a: '§' });
Some services add an initial utf8=✓ value to forms so that old Internet Explorer versions are more likely to submit the form as utf-8. Additionally, the server can check the value against wrong encodings of the checkmark character and detect that a query string or application/x-www-form-urlencoded body was not sent as utf-8, e.g. if the form had an accept-charset parameter or the containing page had a different character set.
qs supports this mechanism via the charsetSentinel option. If specified, the utf8 parameter will be omitted from the returned object. It will be used to switch to iso-8859-1/utf-8 mode depending on how the checkmark is encoded.
Important: When you specify both the charset option and the charsetSentinel option, the charset will be overridden when the request contains a utf8 parameter from which the actual charset can be deduced. In that sense the charset will behave as the default charset rather than the authoritative charset.
var detectedAsUtf8 = qs.parse('utf8=%E2%9C%93&a=%C3%B8', {
charset: 'iso-8859-1',
charsetSentinel: true
});
assert.deepEqual(detectedAsUtf8, { a: 'ø' });
// Browsers encode the checkmark as ✓ when submitting as iso-8859-1:
var detectedAsIso8859_1 = qs.parse('utf8=%26%2310003%3B&a=%F8', {
charset: 'utf-8',
charsetSentinel: true
});
assert.deepEqual(detectedAsIso8859_1, { a: 'ø' });
If you want to decode the &#...; syntax to the actual character, you can specify the interpretNumericEntities option as well:
var detectedAsIso8859_1 = qs.parse('a=%26%239786%3B', {
charset: 'iso-8859-1',
interpretNumericEntities: true
});
assert.deepEqual(detectedAsIso8859_1, { a: '☺' });
It also works when the charset has been detected in charsetSentinel mode.
qs can also parse arrays using a similar [] notation:
You may specify an index as well:
Note that the only difference between an index in an array and a key in an object is that the value between the brackets must be a number to create an array. When creating arrays with specific indices, qs will compact a sparse array to only the existing values preserving their order:
Note that an empty string is also a value, and will be preserved:
var withEmptyString = qs.parse('a[]=&a[]=b');
assert.deepEqual(withEmptyString, { a: ['', 'b'] });
var withIndexedEmptyString = qs.parse('a[0]=b&a[1]=&a[2]=c');
assert.deepEqual(withIndexedEmptyString, { a: ['b', '', 'c'] });
qs will also limit specifying indices in an array to a maximum index of 20. Any array members with an index greater than 20 will instead be converted to an object with the index as the key. This is needed to handle cases when someone sends, for example, a[999999999], since iterating over such a huge sparse array would take significant time.
This limit can be overridden by passing an arrayLimit option:
var withArrayLimit = qs.parse('a[1]=b', { arrayLimit: 0 });
assert.deepEqual(withArrayLimit, { a: { '1': 'b' } });
To disable array parsing entirely, set parseArrays to false.
var noParsingArrays = qs.parse('a[]=b', { parseArrays: false });
assert.deepEqual(noParsingArrays, { a: { '0': 'b' } });
If you mix notations, qs will merge the two items into an object:
var mixedNotation = qs.parse('a[0]=b&a[b]=c');
assert.deepEqual(mixedNotation, { a: { '0': 'b', b: 'c' } });
You can also create arrays of objects:
Some people use commas to join array values; qs can parse them when the comma option is enabled:
var arraysOfObjects = qs.parse('a=b,c', { comma: true })
assert.deepEqual(arraysOfObjects, { a: ['b', 'c'] })
(this cannot convert nested objects, such as a={b:1},{c:d})
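The comma handling can be sketched as follows. This is illustrative only; parseCommaValue is a hypothetical helper, not part of qs:

```javascript
// Sketch (not qs's code): with `comma: true`, a single value containing
// commas is split into an array; otherwise it is left alone.
function parseCommaValue(value, comma) {
  return comma && value.indexOf(',') > -1 ? value.split(',') : value;
}

parseCommaValue('b,c', true);  // → ['b', 'c']
parseCommaValue('b,c', false); // → 'b,c'
```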
When stringifying, qs by default URI encodes output. Objects are stringified as you would expect:
assert.equal(qs.stringify({ a: 'b' }), 'a=b');
assert.equal(qs.stringify({ a: { b: 'c' } }), 'a%5Bb%5D=c');
This encoding can be disabled by setting the encode option to false:
var unencoded = qs.stringify({ a: { b: 'c' } }, { encode: false });
assert.equal(unencoded, 'a[b]=c');
Encoding can be disabled for keys by setting the encodeValuesOnly option to true:
var encodedValues = qs.stringify(
{ a: 'b', c: ['d', 'e=f'], f: [['g'], ['h']] },
{ encodeValuesOnly: true }
);
assert.equal(encodedValues,'a=b&c[0]=d&c[1]=e%3Df&f[0][0]=g&f[1][0]=h');
This encoding can also be replaced by a custom encoding method set as the encoder option:
var encoded = qs.stringify({ a: { b: 'c' } }, { encoder: function (str) {
// Passed in values `a`, `b`, `c`
return // Return encoded string
}})
(Note: the encoder option does not apply if encode is false.)
Analogous to the encoder, there is a decoder option for parse to override decoding of properties and values:
var decoded = qs.parse('x=z', { decoder: function (str) {
// Passed in values `x`, `z`
return // Return decoded string
}})
Examples beyond this point will be shown as though the output is not URI encoded for clarity. Please note that the return values in these cases will be URI encoded during real usage.
When arrays are stringified, by default they are given explicit indices:
You may override this by setting the indices option to false:
You may use the arrayFormat option to specify the format of the output array:
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'indices' })
// 'a[0]=b&a[1]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'brackets' })
// 'a[]=b&a[]=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'repeat' })
// 'a=b&a=c'
qs.stringify({ a: ['b', 'c'] }, { arrayFormat: 'comma' })
// 'a=b,c'
When objects are stringified, by default they use bracket notation:
You may override this to use dot notation by setting the allowDots option to true:
Empty strings and null values will omit the value, but the equals sign (=) remains in place:
Keys with no values (such as an empty object or array) will return nothing:
assert.equal(qs.stringify({ a: [] }), '');
assert.equal(qs.stringify({ a: {} }), '');
assert.equal(qs.stringify({ a: [{}] }), '');
assert.equal(qs.stringify({ a: { b: []} }), '');
assert.equal(qs.stringify({ a: { b: {}} }), '');
Properties that are set to undefined will be omitted entirely:
The query string may optionally be prepended with a question mark:
The delimiter may be overridden with stringify as well:
If you only want to override the serialization of Date objects, you can provide a serializeDate option:
var date = new Date(7);
assert.equal(qs.stringify({ a: date }), 'a=1970-01-01T00:00:00.007Z'.replace(/:/g, '%3A'));
assert.equal(
qs.stringify({ a: date }, { serializeDate: function (d) { return d.getTime(); } }),
'a=7'
);
You may use the sort option to affect the order of parameter keys:
function alphabeticalSort(a, b) {
return a.localeCompare(b);
}
assert.equal(qs.stringify({ a: 'c', z: 'y', b : 'f' }, { sort: alphabeticalSort }), 'a=c&b=f&z=y');
Finally, you can use the filter option to restrict which keys will be included in the stringified output. If you pass a function, it will be called for each key to obtain the replacement value. Otherwise, if you pass an array, it will be used to select properties and array indices for stringification:
function filterFunc(prefix, value) {
if (prefix == 'b') {
// Return an `undefined` value to omit a property.
return;
}
if (prefix == 'e[f]') {
return value.getTime();
}
if (prefix == 'e[g][0]') {
return value * 2;
}
return value;
}
qs.stringify({ a: 'b', c: 'd', e: { f: new Date(123), g: [2] } }, { filter: filterFunc });
// 'a=b&c=d&e[f]=123&e[g][0]=4'
qs.stringify({ a: 'b', c: 'd', e: 'f' }, { filter: ['a', 'e'] });
// 'a=b&e=f'
qs.stringify({ a: ['b', 'c', 'd'], e: 'f' }, { filter: ['a', 0, 2] });
// 'a[0]=b&a[2]=d'
By default, null values are treated like empty strings:
Parsing does not distinguish between parameters with and without equal signs. Both are converted to empty strings.
To distinguish between null values and empty strings use the strictNullHandling flag. In the result string the null values have no = sign:
var strictNull = qs.stringify({ a: null, b: '' }, { strictNullHandling: true });
assert.equal(strictNull, 'a&b=');
To parse values without = back to null, use the strictNullHandling flag:
var parsedStrictNull = qs.parse('a&b=', { strictNullHandling: true });
assert.deepEqual(parsedStrictNull, { a: null, b: '' });
To completely skip rendering keys with null values, use the skipNulls flag:
var nullsSkipped = qs.stringify({ a: 'b', c: null}, { skipNulls: true });
assert.equal(nullsSkipped, 'a=b');
If you're communicating with legacy systems, you can switch to iso-8859-1 using the charset option:
Characters that don’t exist in iso-8859-1 will be converted to numeric entities, similar to what browsers do:
var numeric = qs.stringify({ a: '☺' }, { charset: 'iso-8859-1' });
assert.equal(numeric, 'a=%26%239786%3B');
You can use the charsetSentinel option to announce the character encoding by including an utf8=✓ parameter with the proper encoding of the checkmark, similar to what Ruby on Rails and others do when submitting forms.
var sentinel = qs.stringify({ a: '☺' }, { charsetSentinel: true });
assert.equal(sentinel, 'utf8=%E2%9C%93&a=%E2%98%BA');
var isoSentinel = qs.stringify({ a: 'æ' }, { charsetSentinel: true, charset: 'iso-8859-1' });
assert.equal(isoSentinel, 'utf8=%26%2310003%3B&a=%E6');
By default, the encoding and decoding of characters is done in utf-8, and iso-8859-1 support is also built in via the charset parameter.
If you wish to encode query strings to a different character set (e.g. Shift JIS), you can use the qs-iconv library:
var encoder = require('qs-iconv/encoder')('shift_jis');
var shiftJISEncoded = qs.stringify({ a: 'こんにちは!' }, { encoder: encoder });
assert.equal(shiftJISEncoded, 'a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I');
This also works for decoding of query strings:
var decoder = require('qs-iconv/decoder')('shift_jis');
var obj = qs.parse('a=%82%B1%82%F1%82%C9%82%BF%82%CD%81I', { decoder: decoder });
assert.deepEqual(obj, { a: 'こんにちは!' });
RFC 3986 is used as the default format and encodes ' ' to %20, which is backward compatible. At the same time, output can be stringified as per RFC 1738, with ' ' encoded as '+'.
assert.equal(qs.stringify({ a: 'b c' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC3986' }), 'a=b%20c');
assert.equal(qs.stringify({ a: 'b c' }, { format : 'RFC1738' }), 'a=b+c');
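The space-handling difference between the two formats can be sketched in plain JavaScript. This is illustrative, not qs's implementation; formatPair is a hypothetical helper, and encodeURIComponent already produces RFC 3986-style %20:

```javascript
// Sketch: encodeURIComponent yields RFC 3986-style output (space → %20);
// RFC 1738 output replaces the encoded spaces with '+'.
function formatPair(key, value, format) {
  var pair = encodeURIComponent(key) + '=' + encodeURIComponent(value);
  return format === 'RFC1738' ? pair.replace(/%20/g, '+') : pair;
}

formatPair('a', 'b c', 'RFC3986'); // → 'a=b%20c'
formatPair('a', 'b c', 'RFC1738'); // → 'a=b+c'
```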
Produces a string that represents array data in a text table.

Table data is described using an array of rows, each of which is an array of cells.
import tableImport from 'table';
const { table } = tableImport;
// Using commonjs?
// const {table} = require('table');
let data,
output;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
/**
* @typedef {string} table~cell
*/
/**
* @typedef {table~cell[]} table~row
*/
/**
* @typedef {Object} table~columns
* @property {string} alignment Cell content alignment (enum: left, center, right) (default: left).
* @property {number} width Column width (default: auto).
* @property {number} truncate Number of characters at which the content will be truncated (default: Infinity).
* @property {number} paddingLeft Cell content padding width left (default: 1).
* @property {number} paddingRight Cell content padding width right (default: 1).
*/
/**
* @typedef {Object} table~border
* @property {string} topBody
* @property {string} topJoin
* @property {string} topLeft
* @property {string} topRight
* @property {string} bottomBody
* @property {string} bottomJoin
* @property {string} bottomLeft
* @property {string} bottomRight
* @property {string} bodyLeft
* @property {string} bodyRight
* @property {string} bodyJoin
* @property {string} joinBody
* @property {string} joinLeft
* @property {string} joinRight
* @property {string} joinJoin
*/
/**
* Used to dynamically tell table whether to draw a line separating rows or not.
* The default behavior is to always return true.
*
* @typedef {function} drawHorizontalLine
* @param {number} index
* @param {number} size
* @return {boolean}
*/
/**
* @typedef {Object} table~config
* @property {table~border} border
* @property {table~columns[]} columns Column specific configuration.
* @property {table~columns} columnDefault Default values for all columns. Column specific settings overwrite the default values.
* @property {table~drawHorizontalLine} drawHorizontalLine
*/
/**
* Generates a text table.
*
* @param {table~row[]} rows
* @param {table~config} config
* @return {String}
*/
output = table(data);
console.log(output);
╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════╧════╝
{string} config.columns[{number}].alignment property controls content horizontal alignment within a cell.
Valid values are: “left”, “right” and “center”.
let config,
data,
output;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
config = {
columns: {
0: {
alignment: 'left',
width: 10
},
1: {
alignment: 'center',
width: 10
},
2: {
alignment: 'right',
width: 10
}
}
};
output = table(data, config);
console.log(output);
╔════════════╤════════════╤════════════╗
║ 0A │ 0B │ 0C ║
╟────────────┼────────────┼────────────╢
║ 1A │ 1B │ 1C ║
╟────────────┼────────────┼────────────╢
║ 2A │ 2B │ 2C ║
╚════════════╧════════════╧════════════╝
{number} config.columns[{number}].width property restricts column width to a fixed width.
let data,
output,
options;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
options = {
columns: {
1: {
width: 10
}
}
};
output = table(data, options);
console.log(output);
╔════╤════════════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────────────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────────────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════════════╧════╝
{object} config.border property describes characters used to draw the table border.
let config,
data,
output;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
config = {
border: {
topBody: `─`,
topJoin: `┬`,
topLeft: `┌`,
topRight: `┐`,
bottomBody: `─`,
bottomJoin: `┴`,
bottomLeft: `└`,
bottomRight: `┘`,
bodyLeft: `│`,
bodyRight: `│`,
bodyJoin: `│`,
joinBody: `─`,
joinLeft: `├`,
joinRight: `┤`,
joinJoin: `┼`
}
};
output = table(data, config);
console.log(output);
┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘
{function} config.drawHorizontalLine property is a function that is called for every non-content row in the table. The result of the function {boolean} determines whether a row is drawn.
let data,
output,
options;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C'],
['3A', '3B', '3C'],
['4A', '4B', '4C']
];
options = {
/**
* @typedef {function} drawHorizontalLine
* @param {number} index
* @param {number} size
* @return {boolean}
*/
drawHorizontalLine: (index, size) => {
return index === 0 || index === 1 || index === size - 1 || index === size;
}
};
output = table(data, options);
console.log(output);
╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
║ 2A │ 2B │ 2C ║
║ 3A │ 3B │ 3C ║
╟────┼────┼────╢
║ 4A │ 4B │ 4C ║
╚════╧════╧════╝
Horizontal lines inside the table are not drawn.
import {
table,
getBorderCharacters
} from 'table';
const data = [
['-rw-r--r--', '1', 'pandorym', 'staff', '1529', 'May 23 11:25', 'LICENSE'],
['-rw-r--r--', '1', 'pandorym', 'staff', '16327', 'May 23 11:58', 'README.md'],
['drwxr-xr-x', '76', 'pandorym', 'staff', '2432', 'May 23 12:02', 'dist'],
['drwxr-xr-x', '634', 'pandorym', 'staff', '20288', 'May 23 11:54', 'node_modules'],
['-rw-r--r--', '1,', 'pandorym', 'staff', '525688', 'May 23 11:52', 'package-lock.json'],
['-rw-r--r--@', '1', 'pandorym', 'staff', '2440', 'May 23 11:25', 'package.json'],
['drwxr-xr-x', '27', 'pandorym', 'staff', '864', 'May 23 11:25', 'src'],
['drwxr-xr-x', '20', 'pandorym', 'staff', '640', 'May 23 11:25', 'test'],
];
const config = {
singleLine: true
};
const output = table(data, config);
console.log(output);
╔═════════════╤═════╤══════════╤═══════╤════════╤══════════════╤═══════════════════╗
║ -rw-r--r-- │ 1 │ pandorym │ staff │ 1529 │ May 23 11:25 │ LICENSE ║
║ -rw-r--r-- │ 1 │ pandorym │ staff │ 16327 │ May 23 11:58 │ README.md ║
║ drwxr-xr-x │ 76 │ pandorym │ staff │ 2432 │ May 23 12:02 │ dist ║
║ drwxr-xr-x │ 634 │ pandorym │ staff │ 20288 │ May 23 11:54 │ node_modules ║
║ -rw-r--r-- │ 1, │ pandorym │ staff │ 525688 │ May 23 11:52 │ package-lock.json ║
║ -rw-r--r--@ │ 1 │ pandorym │ staff │ 2440 │ May 23 11:25 │ package.json ║
║ drwxr-xr-x │ 27 │ pandorym │ staff │ 864 │ May 23 11:25 │ src ║
║ drwxr-xr-x │ 20 │ pandorym │ staff │ 640 │ May 23 11:25 │ test ║
╚═════════════╧═════╧══════════╧═══════╧════════╧══════════════╧═══════════════════╝
{number} config.columns[{number}].paddingLeft and {number} config.columns[{number}].paddingRight properties control content padding within a cell. Property value represents a number of whitespaces used to pad the content.
let config,
data,
output;
data = [
['0A', 'AABBCC', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
config = {
columns: {
0: {
paddingLeft: 3
},
1: {
width: 2,
paddingRight: 3
}
}
};
output = table(data, config);
console.log(output);
╔══════╤══════╤════╗
║ 0A │ AA │ 0C ║
║ │ BB │ ║
║ │ CC │ ║
╟──────┼──────┼────╢
║ 1A │ 1B │ 1C ║
╟──────┼──────┼────╢
║ 2A │ 2B │ 2C ║
╚══════╧══════╧════╝
### Predefined Border Templates
You can load one of the predefined border templates using getBorderCharacters function.
import {
table,
getBorderCharacters
} from 'table';
let config,
data;
data = [
['0A', '0B', '0C'],
['1A', '1B', '1C'],
['2A', '2B', '2C']
];
config = {
border: getBorderCharacters(`name of the template`)
};
table(data, config);
# honeywell
╔════╤════╤════╗
║ 0A │ 0B │ 0C ║
╟────┼────┼────╢
║ 1A │ 1B │ 1C ║
╟────┼────┼────╢
║ 2A │ 2B │ 2C ║
╚════╧════╧════╝
# norc
┌────┬────┬────┐
│ 0A │ 0B │ 0C │
├────┼────┼────┤
│ 1A │ 1B │ 1C │
├────┼────┼────┤
│ 2A │ 2B │ 2C │
└────┴────┴────┘
# ramac (ASCII; for use in terminals that do not support Unicode characters)
+----+----+----+
| 0A | 0B | 0C |
|----|----|----|
| 1A | 1B | 1C |
|----|----|----|
| 2A | 2B | 2C |
+----+----+----+
# void (no borders; see "borderless table" section of the documentation)
0A 0B 0C
1A 1B 1C
2A 2B 2C
Raise an issue if you’d like to contribute a new border template.
Simply using the "void" border character template creates a table with a lot of unnecessary spacing.
To create a table that is more pleasant to the eye, reset the padding and remove the joining rows, e.g.
let output;
output = table(data, {
border: getBorderCharacters(`void`),
columnDefault: {
paddingLeft: 0,
paddingRight: 1
},
drawHorizontalLine: () => {
return false
}
});
console.log(output);
0A 0B 0C
1A 1B 1C
2A 2B 2C
The table package exports a createStream function used to draw a table and append rows to it.
createStream requires {number} columnDefault.width and {number} columnCount configuration properties.
import {
createStream
} from 'table';
let config,
stream;
config = {
columnDefault: {
width: 50
},
columnCount: 1
};
stream = createStream(config);
setInterval(() => {
stream.write([new Date()]);
}, 500);
The table package uses ANSI escape codes to overwrite the output of the last line when a new row is printed.
The underlying implementation is explained in this Stack Overflow answer.
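The idea can be sketched as follows. This is illustrative, not table's actual internals; eraseLines is a hypothetical helper:

```javascript
// Sketch: to redraw N previously printed lines, move the cursor up one
// line and erase it, N times, then return the cursor to column 0.
function eraseLines(count) {
  var CURSOR_UP_ONE = '\u001b[1A'; // CSI: move cursor up one line
  var ERASE_LINE = '\u001b[2K';    // CSI: erase the entire current line
  var seq = '';
  for (var i = 0; i < count; i++) {
    seq += CURSOR_UP_ONE + ERASE_LINE;
  }
  return seq + '\r';
}

// A streaming renderer would emit the sequence before rewriting the rows:
// process.stdout.write(eraseLines(3) + renderedTable);
```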
Streaming supports all of the configuration properties and functionality of a static table (such as auto text wrapping, alignment and padding), e.g.
import {
createStream
} from 'table';
import _ from 'lodash';
let config,
stream,
i;
config = {
columnDefault: {
width: 50
},
columnCount: 3,
columns: {
0: {
width: 10,
alignment: 'right'
},
1: {
alignment: 'center',
},
2: {
width: 10
}
}
};
stream = createStream(config);
i = 0;
setInterval(() => {
let random;
random = _.sample('abcdefghijklmnopqrstuvwxyz', _.random(1, 30)).join('');
stream.write([i++, new Date(), random]);
}, 500);
To handle content that overflows the container width, the table package implements text wrapping. However, sometimes you may want to truncate content that is too long to be displayed in the table.
{number} config.columns[{number}].truncate property (default: Infinity) truncates the text at the specified length.
let config,
data,
output;
data = [
['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.']
];
config = {
columns: {
0: {
width: 20,
truncate: 100
}
}
};
output = table(data, config);
console.log(output);
╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris conva... ║
╚══════════════════════╝
The table package implements auto text wrapping, i.e. text that is wider than the container width will be separated into multiple lines, e.g.
let config,
data,
output;
data = [
['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.']
];
config = {
columns: {
0: {
width: 20
}
}
};
output = table(data, config);
console.log(output);
╔══════════════════════╗
║ Lorem ipsum dolor si ║
║ t amet, consectetur ║
║ adipiscing elit. Pha ║
║ sellus pulvinar nibh ║
║ sed mauris convallis ║
║ dapibus. Nunc venena ║
║ tis tempus nulla sit ║
║ amet viverra. ║
╚══════════════════════╝
When wrapWord is true the text is broken at the nearest space or one of the special characters ("-", "_", "\", "/", ".", ",", ";"), e.g.
let config,
data,
output;
data = [
['Lorem ipsum dolor sit amet, consectetur adipiscing elit. Phasellus pulvinar nibh sed mauris convallis dapibus. Nunc venenatis tempus nulla sit amet viverra.']
];
config = {
columns: {
0: {
width: 20,
wrapWord: true
}
}
};
output = table(data, config);
console.log(output);
╔══════════════════════╗
║ Lorem ipsum dolor ║
║ sit amet, ║
║ consectetur ║
║ adipiscing elit. ║
║ Phasellus pulvinar ║
║ nibh sed mauris ║
║ convallis dapibus. ║
║ Nunc venenatis ║
║ tempus nulla sit ║
║ amet viverra. ║
╚══════════════════════╝
Cosmiconfig searches for and loads configuration for your program.
It features smart defaults based on conventional expectations in the JavaScript ecosystem. But it’s also flexible enough to search wherever you’d like to search, and load whatever you’d like to load.
By default, Cosmiconfig will start where you tell it to start and search up the directory tree for the following:
- a package.json property
- a JSON or YAML, extensionless "rc file"
- an "rc file" with the extensions .json, .yaml, .yml, .js, or .cjs
- a .config.js or .config.cjs CommonJS module

For example, if your module's name is "myapp", cosmiconfig will search up the directory tree for configuration in the following places:
- a myapp property in package.json
- a .myapprc file in JSON or YAML format
- a .myapprc.json, .myapprc.yaml, .myapprc.yml, .myapprc.js, or .myapprc.cjs file
- a myapp.config.js or myapp.config.cjs CommonJS module exporting an object

Cosmiconfig continues to search up the directory tree, checking each of these places in each directory, until it finds some acceptable configuration (or hits the home directory).
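The default list of search places for a module name can be sketched as follows. This mirrors the list documented above; defaultSearchPlaces is a hypothetical helper, and cosmiconfig exposes the equivalent list through its searchPlaces option:

```javascript
// Sketch: build the default search-place filenames for a module name,
// matching the documented defaults (package.json, rc files, config files).
function defaultSearchPlaces(moduleName) {
  return [
    'package.json',
    '.' + moduleName + 'rc',
    '.' + moduleName + 'rc.json',
    '.' + moduleName + 'rc.yaml',
    '.' + moduleName + 'rc.yml',
    '.' + moduleName + 'rc.js',
    '.' + moduleName + 'rc.cjs',
    moduleName + '.config.js',
    moduleName + '.config.cjs'
  ];
}

defaultSearchPlaces('myapp');
// → ['package.json', '.myapprc', '.myapprc.json', …, 'myapp.config.cjs']
```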
npm install cosmiconfig
Tested in Node 10+.
Create a Cosmiconfig explorer, then either search for or directly load a configuration file.
const { cosmiconfig, cosmiconfigSync } = require('cosmiconfig');
// ...
const explorer = cosmiconfig(moduleName);
// Search for a configuration by walking up directories.
// See documentation for search, below.
explorer.search()
.then((result) => {
// result.config is the parsed configuration object.
// result.filepath is the path to the config file that was found.
// result.isEmpty is true if there was nothing to parse in the config file.
})
.catch((error) => {
// Do something constructive.
});
// Load a configuration directly when you know where it should be.
// The result object is the same as for search.
// See documentation for load, below.
explorer.load(pathToConfig).then(..);
// You can also search and load synchronously.
const explorerSync = cosmiconfigSync(moduleName);
const searchedFor = explorerSync.search();
const loaded = explorerSync.load(pathToConfig);
The result object you get from search or load has the following properties:
- config: The parsed configuration object. undefined if the file is empty.
- filepath: The path to the configuration file that was found.
- isEmpty: true if the configuration file is empty. This property will not be present if the configuration file is not empty.

const { cosmiconfig } = require('cosmiconfig');
const explorer = cosmiconfig(moduleName[, cosmiconfigOptions])
Creates a cosmiconfig instance ("explorer") configured according to the arguments, and initializes its caches.
Type: string. Required.
Your module name. This is used to create the default searchPlaces and packageProp.
If your searchPlaces value will include files, as it does by default (e.g. ${moduleName}rc), your moduleName must consist of characters allowed in filenames. That means you should not copy scoped package names, such as @my-org/my-package, directly into moduleName.
cosmiconfigOptions are documented below. You may not need them, and should first read about the functions you’ll use.
Searches for a configuration file. Returns a Promise that resolves with a result or with null, if no configuration file is found.
You can do the same thing synchronously with explorerSync.search().
Let’s say your module name is goldengrahams so you initialized with const explorer = cosmiconfig('goldengrahams');. Here’s how your default search() will work:
Starting from process.cwd() (or some other directory defined by the searchFrom argument to search()), cosmiconfig looks for configuration objects in the following places:
goldengrahams property in a package.json file..goldengrahamsrc file with JSON or YAML syntax..goldengrahamsrc.json, .goldengrahamsrc.yaml, .goldengrahamsrc.yml, .goldengrahamsrc.js, or .goldengrahamsrc.cjs file.goldengrahams.config.js or goldengrahams.config.cjs CommonJS module exporting the object../, ../, ../../, ../../../, etc., checking the same places in each directory.stopDir).search() Promise resolves with its result (or, with explorerSync.search(), the result is returned).search() Promise resolves with null (or, with explorerSync.search(), null is returned).search() Promise rejects with that error (so you should .catch() it). (Or, with explorerSync.search(), the error is thrown.)If you know exactly where your configuration file should be, you can use load(), instead.
The search process is highly customizable. Use the cosmiconfig options searchPlaces and loaders to precisely define where you want to look for configurations and how you want to load them.
Type: string. Default: process.cwd().
A filename. search() will start its search here.
If the value is a directory, that’s where the search starts. If it’s a file, the search starts in that file’s directory.
Loads a configuration file. Returns a Promise that resolves with a result or rejects with an error (if the file does not exist or cannot be loaded).
Use load if you already know where the configuration file is and you just need to load it.
If you load a package.json file, the result will be derived from whatever property is specified as your packageProp.
You can do the same thing synchronously with explorerSync.load().
Clears the cache used in load().
Clears the cache used in search().
Performs both clearLoadCache() and clearSearchCache().
const { cosmiconfigSync } = require('cosmiconfig');
const explorerSync = cosmiconfigSync(moduleName[, cosmiconfigOptions])Creates a synchronous cosmiconfig instance (“explorerSync”) configured according to the arguments, and initializes its caches.
See cosmiconfig().
Synchronous version of explorer.search().
Returns a result or null.
Synchronous version of explorer.load().
Returns a result.
Clears the cache used in load().
Clears the cache used in search().
Performs both clearLoadCache() and clearSearchCache().
Type: Object.
Possible options are documented below.
Type: Array<string>. Default: See below.
An array of places that search() will check in each directory as it moves up the directory tree. Each place is relative to the directory being searched, and the places are checked in the specified order.
Default searchPlaces:
[
'package.json',
`.${moduleName}rc`,
`.${moduleName}rc.json`,
`.${moduleName}rc.yaml`,
`.${moduleName}rc.yml`,
`.${moduleName}rc.js`,
`.${moduleName}rc.cjs`,
`${moduleName}.config.js`,
`${moduleName}.config.cjs`,
]Create your own array to search more, fewer, or altogether different places.
Every item in searchPlaces needs to have a loader in loaders that corresponds to its extension. (Common extensions are covered by default loaders.) Read more about loaders below.
package.json is a special value: When it is included in searchPlaces, Cosmiconfig will always parse it as JSON and load a property within it, not the whole file. That property is defined with the packageProp option, and defaults to your module name.
Examples, with a module named porgy:
// Disallow extensions on rc files:
[
'package.json',
'.porgyrc',
'porgy.config.js'
]
// ESLint searches for configuration in these places:
[
'.eslintrc.js',
'.eslintrc.yaml',
'.eslintrc.yml',
'.eslintrc.json',
'.eslintrc',
'package.json'
]
// Babel looks in fewer places:
[
'package.json',
'.babelrc'
]
// Maybe you want to look for a wide variety of JS flavors:
[
'porgy.config.js',
'porgy.config.mjs',
'porgy.config.ts',
'porgy.config.coffee'
]
// ^^ You will need to designate custom loaders to tell
// Cosmiconfig how to handle these special JS flavors.
// Look within a .config/ subdirectory of every searched directory:
[
'package.json',
'.porgyrc',
'.config/.porgyrc',
'.porgyrc.json',
'.config/.porgyrc.json'
]Type: Object. Default: See below.
An object that maps extensions to the loader functions responsible for loading and parsing files with those extensions.
Cosmiconfig exposes its default loaders on a named export defaultLoaders.
Default loaders:
const { defaultLoaders } = require('cosmiconfig');
console.log(Object.entries(defaultLoaders))
// [
// [ '.cjs', [Function: loadJs] ],
// [ '.js', [Function: loadJs] ],
// [ '.json', [Function: loadJson] ],
// [ '.yaml', [Function: loadYaml] ],
// [ '.yml', [Function: loadYaml] ],
// [ 'noExt', [Function: loadYaml] ]
// ]

(YAML is a superset of JSON, which means YAML parsers can parse JSON; that is how extensionless files can be either YAML or JSON with only one parser.)
If you provide a loaders object, your object will be merged with the defaults. So you can override one or two without having to override them all.
Keys in loaders are extensions (starting with a period), or noExt to specify the loader for files without extensions, like .myapprc.
Values in loaders are loader functions (described below).
The most common use case for custom loaders value is to load extensionless rc files as strict JSON, instead of JSON or YAML (the default). To accomplish that, provide the following loaders value:
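A strict-JSON loader can be sketched as follows; cosmiconfig's own defaultLoaders['.json'] plays this role, so passing { noExt: defaultLoaders['.json'] } as your loaders value achieves the behavior described above. The standalone function here illustrates the loader contract and is not cosmiconfig's exact implementation:

```javascript
// Illustrative loader: parse an extensionless rc file as strict JSON.
// Loaders receive the file's path and its already-read content.
function strictJsonLoader(filepath, content) {
  try {
    // Returning the parsed object (or null) fulfills the loader contract.
    return JSON.parse(content);
  } catch (err) {
    err.message = `JSON Error in ${filepath}:\n${err.message}`;
    throw err;
  }
}

console.log(strictJsonLoader('/app/.myapprc', '{"port": 8080}'));
```

You would then pass loaders: { noExt: strictJsonLoader } (or the built-in defaultLoaders['.json']) when creating the explorer.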
If you want to load files that are not handled by the loader functions Cosmiconfig exposes, you can write a custom loader function or use one from NPM if it exists.
Third-party loaders:
Use cases for custom loader functions:

- Loading ES modules from .mjs configuration files.

Custom loader functions have the following signature:
// Sync
(filepath: string, content: string) => Object | null
// Async
(filepath: string, content: string) => Object | null | Promise<Object | null>Cosmiconfig reads the file when it checks whether the file exists, so it will provide you with both the file’s path and its content. Do whatever you need to, and return either a configuration object or null (or, for async-only loaders, a Promise that resolves with one of those). null indicates that no real configuration was found and the search should continue.
A few things to note:
- If you use the synchronous API (cosmiconfigSync()), your loaders must be synchronous.
- As an alternative to custom loaders for special JS flavors, you can rely on a require hook, because defaultLoaders['.js'] just uses require. Whether you use custom loaders or a require hook is up to you.

Examples:
// Allow JSON5 syntax:
{
'.json': json5Loader
}
// Allow a special configuration syntax of your own creation:
{
'.special': specialLoader
}
// Allow many flavors of JS, using custom loaders:
{
'.mjs': esmLoader,
'.ts': typeScriptLoader,
'.coffee': coffeeScriptLoader
}
// Allow many flavors of JS but rely on require hooks:
{
'.mjs': defaultLoaders['.js'],
'.ts': defaultLoaders['.js'],
'.coffee': defaultLoaders['.js']
}Type: string | Array<string>. Default: `${moduleName}`.
Name of the property in package.json to look for.
Use a period-delimited string or an array of strings to describe a path to nested properties.
For example, the value 'configs.myPackage' or ['configs', 'myPackage'] will get you the "myPackage" value in a package.json like this:
If nested property names within the path include periods, you need to use an array of strings. For example, the value ['configs', 'foo.bar', 'baz'] will get you the "baz" value in a package.json like this:
If a string includes a period but corresponds to a top-level property name, it will not be interpreted as a period-delimited path. For example, the value 'one.two' will get you the "three" value in a package.json like this:
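The lookup rules above can be sketched in plain JavaScript. This is a simplified illustration of how packageProp resolution behaves, not cosmiconfig's actual code, and getPackageProp is a hypothetical helper name:

```javascript
// Resolve a packageProp-style path against a parsed package.json object.
// Accepts a period-delimited string or an array of strings.
function getPackageProp(pkg, prop) {
  // A string matching a top-level key wins before being split on periods.
  if (typeof prop === 'string' && Object.prototype.hasOwnProperty.call(pkg, prop)) {
    return pkg[prop];
  }
  const path = Array.isArray(prop) ? prop : prop.split('.');
  return path.reduce((obj, key) => (obj == null ? undefined : obj[key]), pkg);
}

console.log(getPackageProp({ configs: { myPackage: { color: 'blue' } } }, 'configs.myPackage'));
console.log(getPackageProp({ 'one.two': { three: true } }, 'one.two')); // top-level key wins
```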
Type: string. Default: Absolute path to your home directory.
Directory where the search will stop.
Type: boolean. Default: true.
If false, no caches will be used. Read more about “Caching” below.
Type: (Result) => Promise<Result> | Result.
A function that transforms the parsed configuration. Receives the result.
If using search() or load() (which are async), the transform function can return the transformed result or return a Promise that resolves with the transformed result. If using cosmiconfigSync's search() or load(), the function must be synchronous and return the transformed result.
The reason you might use this option — instead of simply applying your transform function some other way — is that the transformed result will be cached. If your transformation involves additional filesystem I/O or other potentially slow processing, you can use this option to avoid repeating those steps every time a given configuration is searched or loaded.
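As an illustration, a transform function might normalize a loaded option. The result shape matches the properties described earlier; the verbose option here is hypothetical:

```javascript
// Receives the result object ({ config, filepath, isEmpty }) and must
// return it (or, with the async API, a Promise resolving to it).
function transform(result) {
  if (result && result.config) {
    // Normalize a hypothetical `verbose` option to a boolean.
    result.config.verbose = Boolean(result.config.verbose);
  }
  return result; // the transformed result is what gets cached
}

console.log(transform({ config: { verbose: 1 }, filepath: '/app/.myapprc' }));
```

You would pass this as the transform option when creating the explorer.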
Type: boolean. Default: true.
By default, if search() encounters an empty file (containing nothing but whitespace) in one of the searchPlaces, it will ignore the empty file and move on. If you’d like to load empty configuration files, instead, set this option to false.
Why might you want to load empty configuration files? If you want to throw an error, or if an empty configuration file means something to your program.
As of v2, cosmiconfig uses caching to reduce the need for repetitious reading of the filesystem or expensive transforms. Every new cosmiconfig instance (created with cosmiconfig()) has its own caches.
To avoid or work around caching, you can do the following:
- Set the cosmiconfig option cache to false.
- Use the cache-clearing methods clearLoadCache(), clearSearchCache(), and clearCaches().

rc serves its focused purpose well. cosmiconfig differs in a few key ways, making it more useful for some projects, less useful for others:
- Searches for configuration in a package.json property, an rc file, a .config.js file, and rc files with extensions.

Please note that this project is released with a Contributor Code of Conduct. By participating in this project you agree to abide by its terms.
And please do participate!
This is Google’s officially supported node.js client library for using OAuth 2.0 authorization and authentication with Google APIs.
This library is distributed on npm. To add it as a dependency, run the following command:
This library provides a variety of ways to authenticate to your Google services.

- Application Default Credentials - Use Application Default Credentials when you use a single identity for all users in your application. Especially useful for applications running on Google Cloud.
- OAuth 2 - Use OAuth2 when you need to perform actions on behalf of the end user.
- JSON Web Tokens - Use JWT when you are using a single identity for all users. Especially useful for server->server or server->API communication.
- Google Compute - Directly use a service account on Google Cloud Platform. Useful for server->server or server->API communication.
This library provides an implementation of Application Default Credentials for Node.js. The Application Default Credentials provide a simple way to get authorization credentials for use in calling Google APIs.
They are best suited for cases when the call needs to have the same identity and authorization level for the application independent of the user. This is the recommended approach to authorize calls to Cloud APIs, particularly when you’re building an application that uses Google Cloud Platform.
To use Application Default Credentials, you first need to download a set of JSON credentials for your project. Go to APIs & Auth > Credentials in the Google Developers Console and select Service account from the Add credentials dropdown.
This file is your only copy of these credentials. It should never be committed with your source code, and should be stored securely.
Once downloaded, store the path to this file in the GOOGLE_APPLICATION_CREDENTIALS environment variable.
Before making your API call, you must be sure the API you’re calling has been enabled. Go to APIs & Auth > APIs in the Google Developers Console and enable the APIs you’d like to call. For the example below, you must enable the DNS API.
Rather than manually creating an OAuth2 client, JWT client, or Compute client, the auth library can create the correct credential type for you, depending upon the environment your code is running under.
For example, a JWT auth client will be created when your code is running on your local developer machine, and a Compute client will be created when the same code is running on Google Cloud Platform. If you need a specific set of scopes, you can pass those in the form of a string or an array to the GoogleAuth constructor.
The code below shows how to retrieve a default credential type, depending upon the runtime environment.
const {GoogleAuth} = require('google-auth-library');
/**
* Instead of specifying the type of client you'd like to use (JWT, OAuth2, etc)
* this library will automatically choose the right client based on the environment.
*/
async function main() {
const auth = new GoogleAuth({
scopes: 'https://www.googleapis.com/auth/cloud-platform'
});
const client = await auth.getClient();
const projectId = await auth.getProjectId();
const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
const res = await client.request({ url });
console.log(res.data);
}
main().catch(console.error);This library comes with an OAuth2 client that allows you to retrieve an access token, refresh it, and seamlessly retry the request if you also provide an expiry_date and the token has expired. The basics of Google's OAuth2 implementation are explained in Google's Authorization and Authentication documentation.
In the following examples, you may need a CLIENT_ID, CLIENT_SECRET and REDIRECT_URL. You can find these pieces of information by going to the Developer Console, clicking your project > APIs & auth > credentials.
For more information about OAuth2 and how it works, see here.
Let’s take a look at a complete example.
const {OAuth2Client} = require('google-auth-library');
const http = require('http');
const url = require('url');
const open = require('open');
const destroyer = require('server-destroy');
// Download your OAuth2 configuration from the Google
const keys = require('./oauth2.keys.json');
/**
* Start by acquiring a pre-authenticated oAuth2 client.
*/
async function main() {
const oAuth2Client = await getAuthenticatedClient();
// Make a simple request to the People API using our pre-authenticated client. The `request()` method
// takes a GaxiosOptions object. Visit https://github.com/JustinBeckwith/gaxios.
const url = 'https://people.googleapis.com/v1/people/me?personFields=names';
const res = await oAuth2Client.request({url});
console.log(res.data);
// After acquiring an access_token, you may want to check on the audience, expiration,
// or original scopes requested. You can do that with the `getTokenInfo` method.
const tokenInfo = await oAuth2Client.getTokenInfo(
oAuth2Client.credentials.access_token
);
console.log(tokenInfo);
}
/**
* Create a new OAuth2Client, and go through the OAuth2 content
* workflow. Return the full client to the callback.
*/
function getAuthenticatedClient() {
return new Promise((resolve, reject) => {
// create an oAuth client to authorize the API call. Secrets are kept in a `keys.json` file,
// which should be downloaded from the Google Developers Console.
const oAuth2Client = new OAuth2Client(
keys.web.client_id,
keys.web.client_secret,
keys.web.redirect_uris[0]
);
// Generate the url that will be used for the consent dialog.
const authorizeUrl = oAuth2Client.generateAuthUrl({
access_type: 'offline',
scope: 'https://www.googleapis.com/auth/userinfo.profile',
});
// Open an http server to accept the oauth callback. In this simple example, the
// only request to our webserver is to /oauth2callback?code=<code>
const server = http
.createServer(async (req, res) => {
try {
if (req.url.indexOf('/oauth2callback') > -1) {
// acquire the code from the querystring, and close the web server.
const qs = new url.URL(req.url, 'http://localhost:3000')
.searchParams;
const code = qs.get('code');
console.log(`Code is ${code}`);
res.end('Authentication successful! Please return to the console.');
server.destroy();
// Now that we have the code, use that to acquire tokens.
const r = await oAuth2Client.getToken(code);
// Make sure to set the credentials on the OAuth2 client.
oAuth2Client.setCredentials(r.tokens);
console.info('Tokens acquired.');
resolve(oAuth2Client);
}
} catch (e) {
reject(e);
}
})
.listen(3000, () => {
// open the browser to the authorize url to start the workflow
open(authorizeUrl, {wait: false}).then(cp => cp.unref());
});
destroyer(server);
});
}
main().catch(console.error);This library will automatically obtain an access_token, and automatically refresh the access_token if a refresh_token is present. The refresh_token is only returned on the first authorization, so make sure you store it safely. An easy way to make sure you always store the most recent tokens is to use the tokens event:
const client = await auth.getClient();
client.on('tokens', (tokens) => {
if (tokens.refresh_token) {
// store the refresh_token in my database!
console.log(tokens.refresh_token);
}
console.log(tokens.access_token);
});
const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
const res = await client.request({ url });
// The `tokens` event would now be raised if this was the first requestWith the code returned, you can ask for an access token as shown below:
const tokens = await oauth2Client.getToken(code);
// Now tokens contains an access_token and an optional refresh_token. Save them.
oauth2Client.setCredentials(tokens);If you need to obtain a new refresh_token, ensure the call to generateAuthUrl sets the access_type to offline. The refresh token will only be returned for the first authorization by the user. To force consent, set the prompt property to consent:
// Generate the url that will be used for the consent dialog.
const authorizeUrl = oAuth2Client.generateAuthUrl({
// To get a refresh token, you MUST set access_type to `offline`.
access_type: 'offline',
// set the appropriate scopes
scope: 'https://www.googleapis.com/auth/userinfo.profile',
// A refresh token is only returned the first time the user
// consents to providing access. For illustration purposes,
// setting the prompt to 'consent' will force this consent
// every time, forcing a refresh_token to be returned.
prompt: 'consent'
});access_token informationAfter obtaining and storing an access_token, at a later time you may want to go check the expiration date, original scopes, or audience for the token. To get the token info, you can use the getTokenInfo method:
// after acquiring an oAuth2Client...
const tokenInfo = await oAuth2Client.getTokenInfo('my-access-token');
// take a look at the scopes originally provisioned for the access token
console.log(tokenInfo.scopes);This method will throw if the token is invalid.
If you’re authenticating with OAuth2 from an installed application (like Electron), you may not want to embed your client_secret inside of the application sources. To work around this restriction, you can choose the iOS application type when creating your OAuth2 credentials in the Google Developers console:

If using the iOS type, when creating the OAuth2 client you won’t need to pass a client_secret into the constructor:
const oAuth2Client = new OAuth2Client({
clientId: <your_client_id>,
redirectUri: <your_redirect_uri>
});The Google Developers Console provides a .json file that you can use to configure a JWT auth client and authenticate your requests, for example when using a service account.
const {JWT} = require('google-auth-library');
const keys = require('./jwt.keys.json');
async function main() {
const client = new JWT({
email: keys.client_email,
key: keys.private_key,
scopes: ['https://www.googleapis.com/auth/cloud-platform'],
});
const url = `https://dns.googleapis.com/dns/v1/projects/${keys.project_id}`;
const res = await client.request({url});
console.log(res.data);
}
main().catch(console.error);The parameters for the JWT auth client including how to use it with a .pem file are explained in samples/jwt.js.
Instead of loading credentials from a key file, you can also provide them using an environment variable and the GoogleAuth.fromJSON() method. This is particularly convenient for systems that deploy directly from source control (Heroku, App Engine, etc).
Start by exporting your credentials:
$ export CREDS='{
"type": "service_account",
"project_id": "your-project-id",
"private_key_id": "your-private-key-id",
"private_key": "your-private-key",
"client_email": "your-client-email",
"client_id": "your-client-id",
"auth_uri": "https://accounts.google.com/o/oauth2/auth",
"token_uri": "https://accounts.google.com/o/oauth2/token",
"auth_provider_x509_cert_url": "https://www.googleapis.com/oauth2/v1/certs",
"client_x509_cert_url": "your-cert-url"
}'
Now you can create a new client from the credentials:
const {auth} = require('google-auth-library');
// load the environment variable with our keys
const keysEnvVar = process.env['CREDS'];
if (!keysEnvVar) {
throw new Error('The $CREDS environment variable was not found!');
}
const keys = JSON.parse(keysEnvVar);
async function main() {
// load the JWT or UserRefreshClient from the keys
const client = auth.fromJSON(keys);
client.scopes = ['https://www.googleapis.com/auth/cloud-platform'];
const url = `https://dns.googleapis.com/dns/v1/projects/${keys.project_id}`;
const res = await client.request({url});
console.log(res.data);
}
main().catch(console.error);You can set the HTTPS_PROXY or https_proxy environment variables to proxy HTTPS requests. When HTTPS_PROXY or https_proxy are set, they will be used to proxy SSL requests that do not have an explicit proxy configuration option present.
If your application is running on Google Cloud Platform, you can authenticate using the default service account or by specifying a specific service account.
Note: In most cases, you will want to use Application Default Credentials. Direct use of the Compute class is for very specific scenarios.
const {auth, Compute} = require('google-auth-library');
async function main() {
const client = new Compute({
// Specifying the service account email is optional.
serviceAccountEmail: 'my-service-account@example.com'
});
const projectId = await auth.getProjectId();
const url = `https://dns.googleapis.com/dns/v1/projects/${projectId}`;
const res = await client.request({url});
console.log(res.data);
}
main().catch(console.error);If your application is running on Cloud Run or Cloud Functions, or using Cloud Identity-Aware Proxy (IAP), you will need to fetch an ID token to access your application. For this, use the method getIdTokenClient on the GoogleAuth client.
For invoking Cloud Run services, your service account will need the Cloud Run Invoker IAM permission.
For invoking Cloud Functions, your service account will need the Function Invoker IAM permission.
// Make a request to a protected Cloud Run service.
const {GoogleAuth} = require('google-auth-library');
async function main() {
const url = 'https://cloud-run-1234-uc.a.run.app';
const auth = new GoogleAuth();
const client = await auth.getIdTokenClient(url);
const res = await client.request({url});
console.log(res.data);
}
main().catch(console.error);A complete example can be found in samples/idtokens-serverless.js.
For invoking Cloud Identity-Aware Proxy, you will need to pass the Client ID used when you set up your protected resource as the target audience.
// Make a request to a protected Cloud Identity-Aware Proxy (IAP) resource
const {GoogleAuth} = require('google-auth-library');
async function main() {
const targetAudience = 'iap-client-id';
const url = 'https://iap-url.com';
const auth = new GoogleAuth();
const client = await auth.getIdTokenClient(targetAudience);
const res = await client.request({url});
console.log(res.data);
}
main().catch(console.error);A complete example can be found in samples/idtokens-iap.js.
If you’ve secured your IAP app with signed headers, you can use this library to verify the IAP header:
const {OAuth2Client} = require('google-auth-library');
// Expected audience for App Engine.
const expectedAudience = `/projects/your-project-number/apps/your-project-id`;
// IAP issuer
const issuers = ['https://cloud.google.com/iap'];
// Verify the token. OAuth2Client throws an Error if verification fails
const oAuth2Client = new OAuth2Client();
const response = await oAuth2Client.getIapPublicKeys();
const ticket = await oAuth2Client.verifySignedJwtWithCertsAsync(
idToken,
response.pubkeys,
expectedAudience,
issuers
);
// Print out the info contained in the IAP ID token
console.log(ticket)A complete example can be found in samples/verifyIdToken-iap.js.
See CONTRIBUTING.
This plugin intends to support linting of ES2015+ (ES6+) import/export syntax, and prevent issues with misspelling of file paths and import names. All the goodness that the ES2015+ static module syntax intends to provide, marked up in your editor.
IF YOU ARE USING THIS WITH SUBLIME: see the bottom section for important info.
Static analysis:

- Ensure imports point to a file/module that can be resolved. (no-unresolved)
- Ensure named imports correspond to a named export in the remote file. (named)
- Ensure a default export is present, given a default import. (default)
- Ensure imported namespaces contain dereferenced properties as they are dereferenced. (namespace)
- Restrict which files can be imported in a given folder. (no-restricted-paths)
- Forbid import of modules using absolute paths. (no-absolute-path)
- Forbid require() calls with expressions. (no-dynamic-require)
- Prevent importing the submodules of other modules. (no-internal-modules)
- Forbid webpack loader syntax in imports. (no-webpack-loader-syntax)
- Forbid a module from importing itself. (no-self-import)
- Forbid a module from importing a module with a dependency path back to itself. (no-cycle)
- Prevent unnecessary path segments in import and require statements. (no-useless-path-segments)
- Forbid importing modules from parent directories. (no-relative-parent-imports)

Helpful warnings:

- Report any invalid exports, i.e. re-export of the same name. (export)
- Report use of an exported name as the identifier of a default export. (no-named-as-default)
- Report use of an exported name as a property of a default export. (no-named-as-default-member)
- Report imported names marked with the @deprecated documentation tag (no-deprecated)
- Forbid the use of extraneous packages. (no-extraneous-dependencies)
- Forbid the use of mutable exports with var or let. (no-mutable-exports)
- Report modules without exports, or exports without a matching import in another module. (no-unused-modules)

Module systems:

- Report potentially ambiguous parse goal (script vs. module) (unambiguous)
- Report CommonJS require calls and module.exports or exports.*. (no-commonjs)
- Report AMD require and define calls. (no-amd)
- No Node.js builtin modules. (no-nodejs-modules)

Style guide:

- Ensure all imports appear before other statements. (first)
- Ensure all exports appear after other statements. (exports-last)
- Report repeated import of the same module in multiple places. (no-duplicates)
- Forbid namespace (a.k.a. "wildcard" *) imports (no-namespace)
- Ensure consistent use of file extensions within the import path. (extensions)
- Enforce a convention in module import order. (order)
- Enforce a newline after import statements. (newline-after-import)
- Prefer a default export if the module exports a single name. (prefer-default-export)
- Limit the maximum number of dependencies a module can have. (max-dependencies)
- Forbid unassigned imports. (no-unassigned-import)
- Forbid named default exports. (no-named-default)
- Forbid default exports. (no-default-export)
- Forbid named exports. (no-named-export)
- Forbid anonymous values as default exports. (no-anonymous-default-export)
- Prefer named exports to be grouped together in a single export declaration. (group-exports)
- Enforce a leading comment with the webpackChunkName for dynamic imports. (dynamic-import-chunkname)

eslint-plugin-import for enterprise: Available as part of the Tidelift Subscription.
The maintainers of eslint-plugin-import and thousands of other packages are working with Tidelift to deliver commercial support and maintenance for the open source dependencies you use to build your applications. Save time, reduce risk, and improve code health, while paying the maintainers of the exact dependencies you use. Learn more.
or if you manage ESLint as a dev dependency:
All rules are off by default. However, you may configure them manually in your .eslintrc.(yml|json|js), or extend one of the canned configs:
---
extends:
- eslint:recommended
- plugin:import/errors
- plugin:import/warnings
# or configure manually:
plugins:
- import
rules:
import/no-unresolved: [2, {commonjs: true, amd: true}]
import/named: 2
import/namespace: 2
import/default: 2
import/export: 2
# etc...You may use the following shortcut or assemble your own config using the granular settings described below.
Make sure you have installed @typescript-eslint/parser, which is used in the following configuration. Unfortunately, npm does not allow listing optional peer dependencies.
extends:
- eslint:recommended
- plugin:import/errors
- plugin:import/warnings
- plugin:import/typescript # this line does the trickWith the advent of module bundlers and the current state of modules and module syntax specs, it’s not always obvious where import x from 'module' should look to find the file behind module.
Up through v0.10ish, this plugin directly used substack's resolve package, which implements Node's import behavior. This works pretty well in most cases.
However, webpack allows a number of things in import module source strings that Node does not, such as loaders (import 'file!./whatever') and a number of aliasing schemes, such as externals: mapping a module id to a global name at runtime (allowing some modules to be included more traditionally via script tags).
In the interest of supporting both of these, v0.11 introduces resolvers.
Currently Node and webpack resolution have been implemented, but the resolvers are just npm packages, so third party packages are supported (and encouraged!).
You can reference resolvers in several ways (in order of precedence):
- As a conventional eslint-import-resolver name, like eslint-import-resolver-foo:

// .eslintrc.js
module.exports = {
settings: {
'import/resolver': {
foo: { someConfig: value }
}
}
}

- With a full npm module name, like my-awesome-npm-module:

// .eslintrc.js
module.exports = {
settings: {
'import/resolver': {
'my-awesome-npm-module': { someConfig: value }
}
}
}

- With a filesystem path to the resolver, given as a computed property name:

// .eslintrc.js
module.exports = {
settings: {
'import/resolver': {
[path.resolve('../../../my-resolver')]: { someConfig: value }
}
}
}Relative paths will be resolved relative to the source’s nearest package.json or the process’s current working directory if no package.json is found.
If you are interested in writing a resolver, see the spec for more details.
You may set the following settings in your .eslintrc:
import/extensionsA list of file extensions that will be parsed as modules and inspected for exports.
This defaults to ['.js'], unless you are using the react shared config, in which case it is specified as ['.js', '.jsx'].
If you require more granular extension definitions, you can use:
Note that this is different from (and likely a subset of) any import/resolver extensions settings, which may include .json, .coffee, etc. which will still factor into the no-unresolved rule.
Also, the following import/ignore patterns will overrule this list.
import/ignoreA list of regex strings that, if matched by a path, will not report the matching module if no exports are found. In practice, this means rules other than no-unresolved will not report on any imports with (absolute filesystem) paths matching this pattern.
no-unresolved has its own ignore setting.
settings:
import/ignore:
- \.coffee$ # fraught with parse errors
- \.(scss|less|css)$ # can't parse unprocessed CSS modules, either

import/core-modulesAn array of additional modules to consider as “core” modules: modules that should be considered resolved but have no path on the filesystem. Your resolver may already define some of these (for example, the Node resolver knows about fs and path), so you need not redefine those.
For example, Electron exposes an electron module:
that would otherwise be unresolved. To avoid this, you may provide electron as a core module:
In Electron’s specific case, there is a shared config named electron that specifies this for you.
Contribution of more such shared configs for other platforms are welcome!
import/external-module-foldersAn array of folders. Only modules resolved from those folders will be considered “external”. By default, this is ["node_modules"]. This makes sense if you have configured your path or webpack to handle your internal paths differently and want to consider modules from some folders, for example bower_components or jspm_modules, as “external”.
This option is also useful in a monorepo setup: list here all directories that contain the monorepo's packages, and they will be treated as external no matter which resolver is used.
Each item in this array is either a folder’s name, its subpath, or its absolute prefix path:
jspm_modules will match any file or folder named jspm_modules or which has a direct or non-direct parent named jspm_modules, e.g. /home/me/project/jspm_modules or /home/me/project/jspm_modules/some-pkg/index.js.
packages/core will match any path that contains these two segments, for example /home/me/project/packages/core/src/utils.js.
/home/me/project/packages will only match files and directories inside this directory, and the directory itself.
Please note that incomplete names are not allowed here so components won’t match bower_components and packages/ui won’t match packages/ui-utils (but will match packages/ui/utils).
import/parsersA map from parsers to file extension arrays. If a file extension is matched, the dependency parser will require and use the map key as the parser instead of the configured ESLint parser. This is useful if you’re inter-op-ing with TypeScript directly using webpack, for example:
In this case, @typescript-eslint/parser must be installed and require-able from the running eslint module’s location (i.e., install it as a peer of ESLint).
This is currently only tested with @typescript-eslint/parser (and its predecessor, typescript-eslint-parser) but should theoretically work with any moderately ESTree-compliant parser.
It’s difficult to say how well various plugin features will be supported, too, depending on how far down the rabbit hole goes. Submit an issue if you find strange behavior beyond here, but steel your heart against the likely outcome of closing with wontfix.
import/resolver
See resolvers.
import/cache
Settings for cache behavior. Memoization is used at various levels to avoid the copious amount of fs.statSync/module parse calls required to correctly report errors.
For normal eslint console runs, the cache lifetime is irrelevant, as we can strongly assume that files should not be changing during the lifetime of the linter process (and thus, the cache in memory).
For long-lasting processes, like eslint_d or eslint-loader, however, it’s important that there be some notion of staleness.
If you never use eslint_d or eslint-loader, you may set the cache lifetime to Infinity and everything should be fine:
Otherwise, set some integer, and cache entries will be evicted after that many seconds have elapsed:
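Both cases might look like this in an .eslintrc.js (a hedged sketch of the settings shape described above):

```javascript
// Hypothetical .eslintrc.js sketches for the two cases described above.
// One-shot CLI runs: never expire cache entries.
const neverExpire = {
  settings: { 'import/cache': { lifetime: Infinity } },
};

// Long-lived processes (eslint_d, eslint-loader): evict entries
// after the given number of seconds.
const expireAfterThirtySeconds = {
  settings: { 'import/cache': { lifetime: 30 } },
};

module.exports = neverExpire;
```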
import/internal-regex
A regex for packages that should be treated as internal. Useful when you are utilizing a monorepo setup or developing a set of packages that depend on each other.
By default, any package referenced from import/external-module-folders will be considered “external”, including packages in a monorepo such as a yarn workspaces or lerna environment. This setting is useful if you want to mark those packages as “internal” instead.
For example, if the packages in your monorepo are all in @scope, you can configure import/internal-regex like this:
SublimeLinter-eslint introduced a change to support .eslintignore files which altered the way file paths are passed to ESLint when linting during editing. This change sends a relative path instead of the absolute path to the file (as ESLint normally provides), which can make it impossible for this plugin to resolve dependencies on the filesystem.
This workaround should no longer be necessary with the release of ESLint 2.0, when .eslintignore will be updated to work more like a .gitignore, which should support proper ignoring of absolute paths via --stdin-filename.
In the meantime, see roadhump/SublimeLinter-eslint#58 for more details and discussion, but essentially, you may find you need to add the following SublimeLinter config to your Sublime project file:
{
"folders":
[
{
"path": "code"
}
],
"SublimeLinter":
{
"linters":
{
"eslint":
{
"chdir": "${project}/code"
}
}
}
}
Note that ${project}/code matches the code provided at folders[0].path.
The purpose of the chdir setting, in this case, is to set the working directory from which ESLint is executed to be the same as the directory on which SublimeLinter-eslint bases the relative path it provides.
See the SublimeLinter docs on chdir for more information, in case this does not work with your project.
If you are not using .eslintignore, or don’t have a Sublime project file, you can also do the following via a .sublimelinterrc file in some ancestor directory of your code:
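The .sublimelinterrc contents might look something like the following; note the exact chdir value (the ${sublimelinterrc} variable, which expands to the directory containing the rc file) is an assumption about SublimeLinter's settings variables, so consult its docs for your version:

```json
{
  "linters": {
    "eslint": {
      "chdir": "${sublimelinterrc}"
    }
  }
}
```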
I also found that I needed to set rc_search_limit to null, which removes the file hierarchy search limit when looking up the directory tree for .sublimelinterrc:
In Package Settings / SublimeLinter / User Settings:
I believe this defaults to 3, so you may not need to alter it depending on your project folder max depth.
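That user setting might look like this; the "user" wrapper key is an assumption about how SublimeLinter nests its user settings file:

```json
{
  "user": {
    "rc_search_limit": null
  }
}
```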
Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.
Uses the built-in implementation when available.
npm install safe-buffer
The goal of this package is to provide a safe replacement for the node.js Buffer.
It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
var Buffer = require('safe-buffer').Buffer
// Existing buffer code will continue to work without issues:
new Buffer('hey', 'utf8')
new Buffer([1, 2, 3], 'utf8')
new Buffer(obj)
new Buffer(16) // create an uninitialized buffer (potentially unsafe)
// But you can use these new explicit APIs to make clear what you want:
Buffer.from('hey', 'utf8') // convert from many types to a Buffer
Buffer.alloc(16) // create a zero-filled buffer (safe)
Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe)
Buffer.from(array)
array {Array}
Allocates a new Buffer using an array of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']
A TypeError will be thrown if array is not an Array.
Buffer.from(arrayBuffer[, byteOffset[, length]])
arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
byteOffset {Number} Default: 0
length {Number} Default: arrayBuffer.length - byteOffset
When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>
The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2
A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.
Buffer.from(buffer)
buffer {Buffer}
Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)
A TypeError will be thrown if buffer is not a Buffer.
Buffer.from(str[, encoding])
str {String} String to encode.
encoding {String} Encoding to use. Default: 'utf8'
Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést
A TypeError will be thrown if str is not a string.
Buffer.alloc(size[, fill[, encoding]])
size {Number}
fill {Value} Default: undefined
encoding {String} Default: 'utf8'
Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See buf.fill() for more information.
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>
Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.
A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size)
size {Number}
Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>
A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
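The pooling rule above can be demonstrated with the real Buffer API (the size threshold shown assumes the default poolSize of 8192):

```javascript
// Sketch of the pooling rule: allocUnsafe may slice allocations at or
// below half of Buffer.poolSize from the shared pool; alloc never does.
const half = Buffer.poolSize >> 1; // 4096 with the default poolSize of 8192

const pooled = Buffer.allocUnsafe(100);        // size <= half: may come from the pool
const unpooled = Buffer.allocUnsafe(half + 1); // size > half: gets its own allocation
const zeroed = Buffer.alloc(100);              // never pooled, always zero-filled

console.log(half, pooled.length, unpooled.length, zeroed.every((b) => b === 0));
```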
Buffer.allocUnsafeSlow(size)
size {Number}
Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.
// need to keep around a few small chunks of memory
const store = [];
socket.on('readable', () => {
const data = socket.read();
// allocate for retained data
const sb = Buffer.allocUnsafeSlow(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});
Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.
A TypeError will be thrown if size is not a number.
The rest of the Buffer API is exactly the same as in node.js. See the docs.
Why is Buffer unsafe?
Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.
The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.
Because the Buffer constructor is so powerful, you often see code like this:
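A representative sketch of such code (not a quote from any particular package; the helper name toHex is hypothetical):

```javascript
// A convenience helper built on the overloaded constructor.
// Safe only when str is actually a string.
var toHex = function (str) {
  return new Buffer(str).toString('hex')
}

console.log(toHex('abc')) // prints 616263 when given a string
```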
But what happens if toHex is called with a Number argument?
If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.
When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.
From the node.js docs:
new Buffer(size)
sizeNumberThe underlying memory for
Bufferinstances created in this way is not initialized. The contents of a newly createdBufferare unknown and could contain sensitive data. Usebuf.fill(0)to initialize a Buffer to zeroes.
(Emphasis our own.)
When the programmer intends to create an uninitialized Buffer, you often see code like this:
var buf = new Buffer(16)
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}
Would this ever be a problem in real code?
Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.
Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.
Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
var data = ''
req.setEncoding('utf8')
req.on('data', function (chunk) {
data += chunk
})
req.on('end', function () {
var body = JSON.parse(data)
res.end(new Buffer(body.str).toString('hex'))
})
})
server.listen(8080)
In this example, an http client just has to send:
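Given the 1,000 bytes of memory at stake, the request body would be a JSON payload whose str property is a number rather than a string, along the lines of:

```json
{ "str": 1000 }
```

Because body.str is the number 1000, new Buffer(body.str) allocates 1,000 uninitialized bytes, and toString('hex') ships their contents back to the client.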
and it will get back 1,000 bytes of uninitialized memory from the server.
This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.
bittorrent-dht
Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.
Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.
ws
That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.
If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods:
Here’s a vulnerable socket server with some echo functionality:
server.on('connection', function (socket) {
socket.on('message', function (message) {
message = JSON.parse(message)
if (message.type === 'echo') {
socket.send(message.data) // send back the user's message
}
})
})
socket.send(number), called on the server, will disclose server memory.
Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.
It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.
But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.
Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.
Buffer.allocUnsafe(number)
The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.
var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}
We sent a PR to node.js core (merged as semver-major) which defends against one case:
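The defended case is a number passed together with an encoding. On Node versions that include the fix, this sketch shows the rejection (older versions behave differently, as the next paragraph explains):

```javascript
// Hypothetical illustration: an encoding argument implies the first
// argument was meant to be a string, so a number here is rejected.
var threw = false
try {
  new Buffer(16, 'utf8')
} catch (err) {
  threw = true
}
console.log(threw) // true on Node versions that include the fix
```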
In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.
But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.
We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.
We believe the best solution is to:
1. Change new Buffer(number) to return safe, zeroed-out memory
2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)
We now support adding three new APIs:
Buffer.from(value) - convert from any type to a buffer
Buffer.alloc(size) - create a zero-filled buffer
Buffer.allocUnsafe(size) - create an uninitialized buffer with given size
This solves the core problem that affected ws and bittorrent-dht, which is Buffer(variable) getting tricked into taking a number argument.
This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).
This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.
Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.
Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.
The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.
Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.
Thanks to John Hiesey for proofreading this README and auditing the code.
Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.
Uses the built-in implementation when available.
npm install safe-buffer
The goal of this package is to provide a safe replacement for the node.js Buffer.
It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
var Buffer = require('safe-buffer').Buffer
// Existing buffer code will continue to work without issues:
new Buffer('hey', 'utf8')
new Buffer([1, 2, 3], 'utf8')
new Buffer(obj)
new Buffer(16) // create an uninitialized buffer (potentially unsafe)
// But you can use these new explicit APIs to make clear what you want:
Buffer.from('hey', 'utf8') // convert from many types to a Buffer
Buffer.alloc(16) // create a zero-filled buffer (safe)
Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe)array {Array}Allocates a new Buffer using an array of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']A TypeError will be thrown if array is not an Array.
arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()byteOffset {Number} Default: 0length {Number} Default: arrayBuffer.length - byteOffsetWhen passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.
buffer {Buffer}Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)A TypeError will be thrown if buffer is not a Buffer.
str {String} String to encode.encoding {String} Encoding to use, Default: 'utf8'Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a téstA TypeError will be thrown if str is not a string.
size {Number}fill {Value} Default: undefinedencoding {String} Default: utf8Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()] for more information.
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.
A TypeError will be thrown if size is not a number.
size {Number}Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
size {Number}Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.
// need to keep around a few small chunks of memory
const store = [];
socket.on('readable', () => {
const data = socket.read();
// allocate for retained data
const sb = Buffer.allocUnsafeSlow(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});Use of Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.
A TypeError will be thrown if size is not a number.
The rest of the Buffer API is exactly the same as in node.js. See the docs.
Buffer unsafe?Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.
The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.
Because the Buffer constructor is so powerful, you often see code like this:
But what happens if toHex is called with a Number argument?
If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.
When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.
From the node.js docs:
new Buffer(size)
sizeNumberThe underlying memory for
Bufferinstances created in this way is not initialized. The contents of a newly createdBufferare unknown and could contain sensitive data. Usebuf.fill(0)to initialize a Buffer to zeroes.
(Emphasis our own.)
Whenever the programmer intended to create an uninitialized Buffer you often see code like this:
var buf = new Buffer(16)
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.
Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.
Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
var data = ''
req.setEncoding('utf8')
req.on('data', function (chunk) {
data += chunk
})
req.on('end', function () {
var body = JSON.parse(data)
res.end(new Buffer(body.str).toString('hex'))
})
})
server.listen(8080)In this example, an http client just has to send:
and it will get back 1,000 bytes of uninitialized memory from the server.
This is a very serious bug. It’s similar in severity to the the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.
bittorrent-dhtMathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.
Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.
wsThat got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.
If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods:
Here’s a vulnerable socket server with some echo functionality:
server.on('connection', function (socket) {
socket.on('message', function (message) {
message = JSON.parse(message)
if (message.type === 'echo') {
socket.send(message.data) // send back the user's message
}
})
})socket.send(number) called on the server, will disclose server memory.
Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.
It’s important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower.
But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.
Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.
Buffer.allocUnsafe(number)The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.
var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}We sent a PR to node.js core (merged as semver-major) which defends against one case:
In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.
But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.
We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.
We believe the best solution is to:
1. Change new Buffer(number) to return safe, zeroed-out memory
2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)
We now support adding three new APIs:
Buffer.from(value) - convert from any type to a bufferBuffer.alloc(size) - create a zero-filled bufferBuffer.allocUnsafe(size) - create an uninitialized buffer with given sizeThis solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.
This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).
This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.
Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.
Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.
ws
bittorrent-dht

The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.
Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.
Thanks to John Hiesey for proofreading this README and auditing the code.
Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.
Uses the built-in implementation when available.
npm install safe-buffer
The goal of this package is to provide a safe replacement for the node.js Buffer.
It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
var Buffer = require('safe-buffer').Buffer
// Existing buffer code will continue to work without issues:
new Buffer('hey', 'utf8')
new Buffer([1, 2, 3], 'utf8')
new Buffer(obj)
new Buffer(16) // create an uninitialized buffer (potentially unsafe)
// But you can use these new explicit APIs to make clear what you want:
Buffer.from('hey', 'utf8') // convert from many types to a Buffer
Buffer.alloc(16) // create a zero-filled buffer (safe)
Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe)

Buffer.from(array)

array {Array}

Allocates a new Buffer using an array of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']

A TypeError will be thrown if array is not an Array.
Buffer.from(arrayBuffer[, byteOffset[, length]])

arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
byteOffset {Number} Default: 0
length {Number} Default: arrayBuffer.byteLength - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.
Buffer.from(buffer)

buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)

A TypeError will be thrown if buffer is not a Buffer.
Buffer.from(str[, encoding])

str {String} String to encode.
encoding {String} Encoding to use. Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést

A TypeError will be thrown if str is not a string.
Buffer.alloc(size[, fill[, encoding]])

size {Number}
fill {Value} Default: undefined
encoding {String} Default: 'utf8'

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See buf.fill() for more information.
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.
A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size)

size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>

A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
Buffer.allocUnsafeSlow(size)

size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a RangeError is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use buf.fill(0) to initialize such Buffer instances to zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and clean up as many Persistent objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.
// need to keep around a few small chunks of memory
const store = [];
socket.on('readable', () => {
const data = socket.read();
// allocate for retained data
const sb = Buffer.allocUnsafeSlow(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});

Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.
A TypeError will be thrown if size is not a number.
The rest of the Buffer API is exactly the same as in node.js. See the docs.
Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.
The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.
Because the Buffer constructor is so powerful, you often see code like this:
But what happens if toHex is called with a Number argument?
If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.
When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.
From the node.js docs:
new Buffer(size)

size {Number}

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.

(Emphasis our own.)
When the programmer intends to create an uninitialized Buffer, you often see code like this:
var buf = new Buffer(16)
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}

Would this ever be a problem in real code?

Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.
Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.
Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
var data = ''
req.setEncoding('utf8')
req.on('data', function (chunk) {
data += chunk
})
req.on('end', function () {
var body = JSON.parse(data)
res.end(new Buffer(body.str).toString('hex'))
})
})
server.listen(8080)

In this example, an http client just has to send:

{"str": 1000}

and it will get back 1,000 bytes of uninitialized memory from the server.
This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.
bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.
Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.
ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.
If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods:
Here’s a vulnerable socket server with some echo functionality:
server.on('connection', function (socket) {
socket.on('message', function (message) {
message = JSON.parse(message)
if (message.type === 'echo') {
socket.send(message.data) // send back the user's message
}
})
})

socket.send(number), called on the server, will disclose server memory.
Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.
It’s important that node.js offers a fast way to get memory; otherwise performance-critical applications would needlessly get a lot slower.
But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.
Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.
Buffer.allocUnsafe(number)The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.
var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}We sent a PR to node.js core (merged as semver-major) which defends against one case:
In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.
But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.
We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.
We believe the best solution is to:
1. Change new Buffer(number) to return safe, zeroed-out memory
2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)
We now support adding three new APIs:
Buffer.from(value) - convert from any type to a bufferBuffer.alloc(size) - create a zero-filled bufferBuffer.allocUnsafe(size) - create an uninitialized buffer with given sizeThis solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.
This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).
This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.
Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.
Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.
wsbittorrent-dhtThe original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.
Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.
Thanks to John Hiesey for proofreading this README and auditing the code.
Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.
Uses the built-in implementation when available.
npm install safe-buffer
The goal of this package is to provide a safe replacement for the node.js Buffer.
It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
var Buffer = require('safe-buffer').Buffer
// Existing buffer code will continue to work without issues:
new Buffer('hey', 'utf8')
new Buffer([1, 2, 3], 'utf8')
new Buffer(obj)
new Buffer(16) // create an uninitialized buffer (potentially unsafe)
// But you can use these new explicit APIs to make clear what you want:
Buffer.from('hey', 'utf8') // convert from many types to a Buffer
Buffer.alloc(16) // create a zero-filled buffer (safe)
Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe)array {Array}Allocates a new Buffer using an array of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']A TypeError will be thrown if array is not an Array.
arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()byteOffset {Number} Default: 0length {Number} Default: arrayBuffer.length - byteOffsetWhen passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.
buffer {Buffer}Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)A TypeError will be thrown if buffer is not a Buffer.
str {String} String to encode.encoding {String} Encoding to use, Default: 'utf8'Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a téstA TypeError will be thrown if str is not a string.
size {Number}fill {Value} Default: undefinedencoding {String} Default: utf8Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()] for more information.
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.
A TypeError will be thrown if size is not a number.
size {Number}Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
size {Number}Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.
// need to keep around a few small chunks of memory
const store = [];
socket.on('readable', () => {
const data = socket.read();
// allocate for retained data
const sb = Buffer.allocUnsafeSlow(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});Use of Buffer.allocUnsafeSlow() should be used only as a last resort after a developer has observed undue memory retention in their applications.
A TypeError will be thrown if size is not a number.
The rest of the Buffer API is exactly the same as in node.js. See the docs.
Buffer unsafe?Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.
The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.
Because the Buffer constructor is so powerful, you often see code like this:
But what happens if toHex is called with a Number argument?
If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.
When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.
From the node.js docs:
new Buffer(size)
sizeNumberThe underlying memory for
Bufferinstances created in this way is not initialized. The contents of a newly createdBufferare unknown and could contain sensitive data. Usebuf.fill(0)to initialize a Buffer to zeroes.
(Emphasis our own.)
Whenever the programmer intended to create an uninitialized Buffer you often see code like this:
var buf = new Buffer(16)
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.
Usually the consequences of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.
Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
var data = ''
req.setEncoding('utf8')
req.on('data', function (chunk) {
data += chunk
})
req.on('end', function () {
var body = JSON.parse(data)
res.end(new Buffer(body.str).toString('hex'))
})
})
server.listen(8080)In this example, an http client just has to send:
and it will get back 1,000 bytes of uninitialized memory from the server.
This is a very serious bug. It’s similar in severity to the the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.
bittorrent-dhtMathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.
Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.
wsThat got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.
If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods:
Here’s a vulnerable socket server with some echo functionality:
server.on('connection', function (socket) {
socket.on('message', function (message) {
message = JSON.parse(message)
if (message.type === 'echo') {
socket.send(message.data) // send back the user's message
}
})
})socket.send(number) called on the server, will disclose server memory.
Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.
It’s important that node.js offers a fast way to get memory otherwise performance-critical applications would needlessly get a lot slower.
But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.
Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.
Buffer.allocUnsafe(number)The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.
var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}We sent a PR to node.js core (merged as semver-major) which defends against one case:
In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.
But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.
We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break 1000s of packages.
We believe the best solution is to:
1. Change new Buffer(number) to return safe, zeroed-out memory
2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)
We now support adding three new APIs:
Buffer.from(value) - convert from any type to a bufferBuffer.alloc(size) - create a zero-filled bufferBuffer.allocUnsafe(size) - create an uninitialized buffer with given sizeThis solves the core problem that affected ws and bittorrent-dht which is Buffer(variable) getting tricked into taking a number argument.
This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).
This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.
Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.
Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.
wsbittorrent-dhtThe original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.
Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.
Thanks to John Hiesey for proofreading this README and auditing the code.
Use the new Node.js Buffer APIs (Buffer.from, Buffer.alloc, Buffer.allocUnsafe, Buffer.allocUnsafeSlow) in all versions of Node.js.
Uses the built-in implementation when available.
npm install safe-buffer
The goal of this package is to provide a safe replacement for the node.js Buffer.
It’s a drop-in replacement for Buffer. You can use it by adding one require line to the top of your node.js modules:
var Buffer = require('safe-buffer').Buffer
// Existing buffer code will continue to work without issues:
new Buffer('hey', 'utf8')
new Buffer([1, 2, 3], 'utf8')
new Buffer(obj)
new Buffer(16) // create an uninitialized buffer (potentially unsafe)
// But you can use these new explicit APIs to make clear what you want:
Buffer.from('hey', 'utf8') // convert from many types to a Buffer
Buffer.alloc(16) // create a zero-filled buffer (safe)
Buffer.allocUnsafe(16) // create an uninitialized buffer (potentially unsafe)array {Array}Allocates a new Buffer using an array of octets.
const buf = Buffer.from([0x62,0x75,0x66,0x66,0x65,0x72]);
// creates a new Buffer containing ASCII bytes
// ['b','u','f','f','e','r']A TypeError will be thrown if array is not an Array.
Buffer.from(arrayBuffer[, byteOffset[, length]])

arrayBuffer {ArrayBuffer} The .buffer property of a TypedArray or a new ArrayBuffer()
byteOffset {Number} Default: 0
length {Number} Default: arrayBuffer.length - byteOffset

When passed a reference to the .buffer property of a TypedArray instance, the newly created Buffer will share the same allocated memory as the TypedArray.
const arr = new Uint16Array(2);
arr[0] = 5000;
arr[1] = 4000;
const buf = Buffer.from(arr.buffer); // shares the memory with arr;
console.log(buf);
// Prints: <Buffer 88 13 a0 0f>
// changing the TypedArray changes the Buffer also
arr[1] = 6000;
console.log(buf);
// Prints: <Buffer 88 13 70 17>

The optional byteOffset and length arguments specify a memory range within the arrayBuffer that will be shared by the Buffer.
const ab = new ArrayBuffer(10);
const buf = Buffer.from(ab, 0, 2);
console.log(buf.length);
// Prints: 2

A TypeError will be thrown if arrayBuffer is not an ArrayBuffer.
Buffer.from(buffer)

buffer {Buffer}

Copies the passed buffer data onto a new Buffer instance.
const buf1 = Buffer.from('buffer');
const buf2 = Buffer.from(buf1);
buf1[0] = 0x61;
console.log(buf1.toString());
// 'auffer'
console.log(buf2.toString());
// 'buffer' (copy is not changed)

A TypeError will be thrown if buffer is not a Buffer.
Buffer.from(str[, encoding])

str {String} String to encode.
encoding {String} Encoding to use, Default: 'utf8'

Creates a new Buffer containing the given JavaScript string str. If provided, the encoding parameter identifies the character encoding. If not provided, encoding defaults to 'utf8'.
const buf1 = Buffer.from('this is a tést');
console.log(buf1.toString());
// prints: this is a tést
console.log(buf1.toString('ascii'));
// prints: this is a tC)st
const buf2 = Buffer.from('7468697320697320612074c3a97374', 'hex');
console.log(buf2.toString());
// prints: this is a tést

A TypeError will be thrown if str is not a string.
Buffer.alloc(size[, fill[, encoding]])

size {Number}
fill {Value} Default: undefined
encoding {String} Default: utf8

Allocates a new Buffer of size bytes. If fill is undefined, the Buffer will be zero-filled.
The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
If fill is specified, the allocated Buffer will be initialized by calling buf.fill(fill). See [buf.fill()] for more information.
If both fill and encoding are specified, the allocated Buffer will be initialized by calling buf.fill(fill, encoding). For example:
const buf = Buffer.alloc(11, 'aGVsbG8gd29ybGQ=', 'base64');
console.log(buf);
// <Buffer 68 65 6c 6c 6f 20 77 6f 72 6c 64>

Calling Buffer.alloc(size) can be significantly slower than the alternative Buffer.allocUnsafe(size) but ensures that the newly created Buffer instance contents will never contain sensitive data.
A TypeError will be thrown if size is not a number.
Buffer.allocUnsafe(size)

size {Number}

Allocates a new non-zero-filled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
const buf = Buffer.allocUnsafe(5);
console.log(buf);
// <Buffer 78 e0 82 02 01>
// (octets will be different, every time)
buf.fill(0);
console.log(buf);
// <Buffer 00 00 00 00 00>

A TypeError will be thrown if size is not a number.
Note that the Buffer module pre-allocates an internal Buffer instance of size Buffer.poolSize that is used as a pool for the fast allocation of new Buffer instances created using Buffer.allocUnsafe(size) (and the deprecated new Buffer(size) constructor) only when size is less than or equal to Buffer.poolSize >> 1 (floor of Buffer.poolSize divided by two). The default value of Buffer.poolSize is 8192 but can be modified.
Use of this pre-allocated internal memory pool is a key difference between calling Buffer.alloc(size, fill) vs. Buffer.allocUnsafe(size).fill(fill). Specifically, Buffer.alloc(size, fill) will never use the internal Buffer pool, while Buffer.allocUnsafe(size).fill(fill) will use the internal Buffer pool if size is less than or equal to half Buffer.poolSize. The difference is subtle but can be important when an application requires the additional performance that Buffer.allocUnsafe(size) provides.
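The pooling difference described above can be observed directly. A quick sketch, assuming a recent Node.js with the default Buffer.poolSize of 8192:

```javascript
// A small Buffer.allocUnsafe allocation is a view onto the shared
// pre-allocated pool, so its backing ArrayBuffer is much larger
// than the Buffer itself.
const pooled = Buffer.allocUnsafe(10);
console.log(pooled.length);            // 10
console.log(pooled.buffer.byteLength); // 8192 with the default pool size

// Buffer.alloc never uses the pool: its backing store matches the
// requested size exactly.
const unpooled = Buffer.alloc(10);
console.log(unpooled.buffer.byteLength); // 10
```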
Buffer.allocUnsafeSlow(size)

size {Number}

Allocates a new non-zero-filled and non-pooled Buffer of size bytes. The size must be less than or equal to the value of require('buffer').kMaxLength (on 64-bit architectures, kMaxLength is (2^31)-1). Otherwise, a [RangeError] is thrown. A zero-length Buffer will be created if a size less than or equal to 0 is specified.
The underlying memory for Buffer instances created in this way is not initialized. The contents of the newly created Buffer are unknown and may contain sensitive data. Use [buf.fill(0)] to initialize such Buffer instances to zeroes.
When using Buffer.allocUnsafe() to allocate new Buffer instances, allocations under 4KB are, by default, sliced from a single pre-allocated Buffer. This allows applications to avoid the garbage collection overhead of creating many individually allocated Buffers. This approach improves both performance and memory usage by eliminating the need to track and cleanup as many Persistent objects.
However, in the case where a developer may need to retain a small chunk of memory from a pool for an indeterminate amount of time, it may be appropriate to create an un-pooled Buffer instance using Buffer.allocUnsafeSlow() then copy out the relevant bits.
// need to keep around a few small chunks of memory
const store = [];
socket.on('readable', () => {
const data = socket.read();
// allocate for retained data
const sb = Buffer.allocUnsafeSlow(10);
// copy the data into the new allocation
data.copy(sb, 0, 0, 10);
store.push(sb);
});

Buffer.allocUnsafeSlow() should be used only as a last resort, after a developer has observed undue memory retention in their applications.
A TypeError will be thrown if size is not a number.
The rest of the Buffer API is exactly the same as in node.js. See the docs.
Why is Buffer unsafe?

Today, the node.js Buffer constructor is overloaded to handle many different argument types like String, Array, Object, TypedArrayView (Uint8Array, etc.), ArrayBuffer, and also Number.
The API is optimized for convenience: you can throw any type at it, and it will try to do what you want.
Because the Buffer constructor is so powerful, you often see code like this:
But what happens if toHex is called with a Number argument?
If an attacker can make your program call the Buffer constructor with a Number argument, then they can make it allocate uninitialized memory from the node.js process. This could potentially disclose TLS private keys, user data, or database passwords.
When the Buffer constructor is passed a Number argument, it returns an UNINITIALIZED block of memory of the specified size. When you create a Buffer like this, you MUST overwrite the contents before returning it to the user.
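To make the difference concrete (a sketch; the bytes in the uninitialized case vary from run to run):

```javascript
// A string argument: the Buffer holds the encoded string.
const fromString = new Buffer('abc', 'utf8')
console.log(fromString.length)          // 3
console.log(fromString.toString('hex')) // '616263'

// A number argument: a 16-byte UNINITIALIZED allocation whose
// contents are whatever happened to be in process memory.
const fromNumber = new Buffer(16)
console.log(fromNumber.length) // 16, contents unpredictable
```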
From the node.js docs:
new Buffer(size)

size {Number}

The underlying memory for Buffer instances created in this way is not initialized. The contents of a newly created Buffer are unknown and could contain sensitive data. Use buf.fill(0) to initialize a Buffer to zeroes.
(Emphasis our own.)
When the programmer intends to create an uninitialized Buffer, you often see code like this:
var buf = new Buffer(16)
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}

Would this ever be a problem in real code? Yes. It’s surprisingly common to forget to check the type of your variables in a dynamically-typed language like JavaScript.
Usually the consequence of assuming the wrong type is that your program crashes with an uncaught exception. But the failure mode for forgetting to check the type of arguments to the Buffer constructor is more catastrophic.
Here’s an example of a vulnerable service that takes a JSON payload and converts it to hex:
// Take a JSON payload {str: "some string"} and convert it to hex
var server = http.createServer(function (req, res) {
var data = ''
req.setEncoding('utf8')
req.on('data', function (chunk) {
data += chunk
})
req.on('end', function () {
var body = JSON.parse(data)
res.end(new Buffer(body.str).toString('hex'))
})
})
server.listen(8080)

In this example, an http client just has to send:

{"str": 1000}
and it will get back 1,000 bytes of uninitialized memory from the server.
This is a very serious bug. It’s similar in severity to the Heartbleed bug that allowed disclosure of OpenSSL process memory by remote attackers.
bittorrent-dht

Mathias Buus and I (Feross Aboukhadijeh) found this issue in one of our own packages, bittorrent-dht. The bug would allow anyone on the internet to send a series of messages to a user of bittorrent-dht and get them to reveal 20 bytes at a time of uninitialized memory from the node.js process.
Here’s the commit that fixed it. We released a new fixed version, created a Node Security Project disclosure, and deprecated all vulnerable versions on npm so users will get a warning to upgrade to a newer version.
ws

That got us wondering if there were other vulnerable packages. Sure enough, within a short period of time, we found the same issue in ws, the most popular WebSocket implementation in node.js.
If certain APIs were called with Number parameters instead of String or Buffer as expected, then uninitialized server memory would be disclosed to the remote peer.
These were the vulnerable methods:
Here’s a vulnerable socket server with some echo functionality:
server.on('connection', function (socket) {
socket.on('message', function (message) {
message = JSON.parse(message)
if (message.type === 'echo') {
socket.send(message.data) // send back the user's message
}
})
})

socket.send(number), called on the server, will disclose server memory.
Here’s the release where the issue was fixed, with a more detailed explanation. Props to Arnout Kazemier for the quick fix. Here’s the Node Security Project disclosure.
It’s important that node.js offers a fast way to get memory; otherwise, performance-critical applications would needlessly get a lot slower.
But we need a better way to signal our intent as programmers. When we want uninitialized memory, we should request it explicitly.
Sensitive functionality should not be packed into a developer-friendly API that loosely accepts many different types. This type of API encourages the lazy practice of passing variables in without checking the type very carefully.
Buffer.allocUnsafe(number)

The functionality of creating buffers with uninitialized memory should be part of another API. We propose Buffer.allocUnsafe(number). This way, it’s not part of an API that frequently gets user input of all sorts of different types passed into it.
var buf = Buffer.allocUnsafe(16) // careful, uninitialized memory!
// Immediately overwrite the uninitialized buffer with data from another buffer
for (var i = 0; i < buf.length; i++) {
buf[i] = otherBuf[i]
}

We sent a PR to node.js core (merged as semver-major) which defends against one case:
In this situation, it’s implied that the programmer intended the first argument to be a string, since they passed an encoding as a second argument. Today, node.js will allocate uninitialized memory in the case of new Buffer(number, encoding), which is probably not what the programmer intended.
But this is only a partial solution, since if the programmer does new Buffer(variable) (without an encoding parameter) there’s no way to know what they intended. If variable is sometimes a number, then uninitialized memory will sometimes be returned.
We could deprecate and remove new Buffer(number) and use Buffer.allocUnsafe(number) when we need uninitialized memory. But that would break thousands of packages.
We believe the best solution is to:
1. Change new Buffer(number) to return safe, zeroed-out memory
2. Create a new API for creating uninitialized Buffers. We propose: Buffer.allocUnsafe(number)
We now support adding three new APIs:
Buffer.from(value) - convert from any type to a buffer
Buffer.alloc(size) - create a zero-filled buffer
Buffer.allocUnsafe(size) - create an uninitialized buffer with given size

This solves the core problem that affected ws and bittorrent-dht, which is Buffer(variable) getting tricked into taking a number argument.
This way, existing code continues working and the impact on the npm ecosystem will be minimal. Over time, npm maintainers can migrate performance-critical code to use Buffer.allocUnsafe(number) instead of new Buffer(number).
This wasn’t merely a theoretical exercise because we found the issue in some of the most popular npm packages.
Fortunately, there’s an easy fix that can be applied today. Use safe-buffer in place of buffer.
Eventually, we hope that node.js core can switch to this new, safer behavior. We believe the impact on the ecosystem would be minimal since it’s not a breaking change. Well-maintained, popular packages would be updated to use Buffer.alloc quickly, while older, insecure packages would magically become safe from this attack vector.
The original issues in bittorrent-dht (disclosure) and ws (disclosure) were discovered by Mathias Buus and Feross Aboukhadijeh.
Thanks to Adam Baldwin for helping disclose these issues and for his work running the Node Security Project.
Thanks to John Hiesey for proofreading this README and auditing the code.
A light-weight module that brings window.fetch to Node.js
(We are looking for v2 maintainers and collaborators)
Instead of implementing XMLHttpRequest in Node.js to run browser-specific Fetch polyfill, why not go from native http to fetch API directly? Hence, node-fetch, minimal code for a window.fetch compatible API on Node.js runtime.
See Matt Andrews’ isomorphic-fetch or Leonardo Quixada’s cross-fetch for isomorphic usage (exports node-fetch for server-side, whatwg-fetch for client-side).
Stay consistent with the window.fetch API.
Convert string output (such as res.text() and res.json()) to UTF-8 automatically.
If you spot a feature this module doesn’t support but window.fetch offers, feel free to open an issue.

Current stable release (2.x)
We suggest you load the module via require until the stabilization of ES modules in node:

const fetch = require('node-fetch');
If you are using a Promise library other than native, set it through fetch.Promise:
NOTE: The documentation below is up-to-date with 2.x releases; see the 1.x readme, changelog and 2.x upgrade guide for the differences.
fetch('https://api.github.com/users/github')
.then(res => res.json())
.then(json => console.log(json));

fetch('https://httpbin.org/post', { method: 'POST', body: 'a=1' })
.then(res => res.json()) // expecting a json response
.then(json => console.log(json));

const body = { a: 1 };
fetch('https://httpbin.org/post', {
method: 'post',
body: JSON.stringify(body),
headers: { 'Content-Type': 'application/json' },
})
.then(res => res.json())
.then(json => console.log(json));

URLSearchParams is available in Node.js as of v7.5.0. See official documentation for more usage methods.
NOTE: The Content-Type header is only set automatically to x-www-form-urlencoded when an instance of URLSearchParams is given as such:
const { URLSearchParams } = require('url');
const params = new URLSearchParams();
params.append('a', 1);
fetch('https://httpbin.org/post', { method: 'POST', body: params })
.then(res => res.json())
.then(json => console.log(json));

NOTE: 3xx-5xx responses are NOT exceptions and should be handled in then(); see the next section for more information.
Adding a catch to the fetch promise chain will catch all exceptions, such as errors originating from node core libraries, network errors and operational errors, which are instances of FetchError. See the error handling document for more details.
It is common to create a helper function to check that the response contains no client (4xx) or server (5xx) error responses:
function checkStatus(res) {
if (res.ok) { // res.status >= 200 && res.status < 300
return res;
} else {
throw new MyCustomError(res.statusText); // MyCustomError: your own Error subclass
}
}
fetch('https://httpbin.org/status/400')
.then(checkStatus)
.then(res => console.log('will not get here...'))

The “Node.js way” is to use streams when possible:
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => {
const dest = fs.createWriteStream('./octocat.png');
res.body.pipe(dest);
});

If you prefer to cache binary data in full, use buffer(). (NOTE: buffer() is a node-fetch-only API)
const fileType = require('file-type');
fetch('https://assets-cdn.github.com/images/modules/logos_page/Octocat.png')
.then(res => res.buffer())
.then(buffer => fileType(buffer))
.then(type => { /* ... */ });

fetch('https://github.com/')
.then(res => {
console.log(res.ok);
console.log(res.status);
console.log(res.statusText);
console.log(res.headers.raw());
console.log(res.headers.get('content-type'));
});

Unlike browsers, you can access raw Set-Cookie headers manually using Headers.raw(). This is a node-fetch only API.
fetch(url).then(res => {
// returns an array of values, instead of a string of comma-separated values
console.log(res.headers.raw()['set-cookie']);
});
const { createReadStream } = require('fs');
const stream = createReadStream('input.txt');
fetch('https://httpbin.org/post', { method: 'POST', body: stream })
.then(res => res.json())
.then(json => console.log(json));
const FormData = require('form-data');
const form = new FormData();
form.append('a', 1);
fetch('https://httpbin.org/post', { method: 'POST', body: form })
.then(res => res.json())
.then(json => console.log(json));
// OR, using custom headers
// NOTE: getHeaders() is non-standard API
const form = new FormData();
form.append('a', 1);
const options = {
method: 'POST',
body: form,
headers: form.getHeaders()
}
fetch('https://httpbin.org/post', options)
.then(res => res.json())
.then(json => console.log(json));
NOTE: You may cancel streamed requests only on Node >= v8.0.0
You may cancel requests with AbortController. A suggested implementation is abort-controller.
An example of timing out a request after 150ms could be achieved as follows:
import AbortController from 'abort-controller';
const controller = new AbortController();
const timeout = setTimeout(
() => { controller.abort(); },
150,
);
fetch(url, { signal: controller.signal })
.then(res => res.json())
.then(
data => {
useData(data)
},
err => {
if (err.name === 'AbortError') {
// request was aborted
}
},
)
.finally(() => {
clearTimeout(timeout);
});
See test cases for more examples.
url: A string representing the URL for fetching
options: Options for the HTTP(S) request
Returns: Promise<Response>
Perform an HTTP(S) fetch.
url should be an absolute url, such as https://example.com/. A path-relative URL (/file/under/root) or protocol-relative URL (//can-be-http-or-https.com/) will result in a rejected Promise.
The default values are shown after each option key.
{
// These properties are part of the Fetch Standard
method: 'GET',
headers: {}, // request headers. format is identical to that accepted by the Headers constructor (see below)
body: null, // request body. can be null, a string, a Buffer, a Blob, or a Node.js Readable stream
redirect: 'follow', // set to `manual` to extract redirect headers, `error` to reject redirect
signal: null, // pass an instance of AbortSignal to optionally abort requests
// The following properties are node-fetch extensions
follow: 20, // maximum redirect count. 0 to not follow redirect
timeout: 0, // req/res timeout in ms, it resets on redirect. 0 to disable (OS limit applies). Signal is recommended instead.
compress: true, // support gzip/deflate content encoding. false to disable
size: 0, // maximum response body size in bytes. 0 to disable
agent: null // http(s).Agent instance or function that returns an instance (see below)
}
If no values are set, the following request headers will be sent automatically:

| Header | Value |
|---|---|
| Accept-Encoding | gzip,deflate (when options.compress === true) |
| Accept | */* |
| Connection | close (when no options.agent is present) |
| Content-Length | (automatically calculated, if possible) |
| Transfer-Encoding | chunked (when req.body is a stream) |
| User-Agent | node-fetch/1.0 (+https://github.com/bitinn/node-fetch) |
Note: when body is a Stream, Content-Length is not set automatically.
The agent option allows you to specify networking related options which are out of the scope of Fetch, including but not limited to the following:
See http.Agent for more information.
In addition, the agent option accepts a function that returns an http(s).Agent instance given the current URL; this is useful for a redirection chain that crosses between the HTTP and HTTPS protocols.
const httpAgent = new http.Agent({
keepAlive: true
});
const httpsAgent = new https.Agent({
keepAlive: true
});
const options = {
agent: function (_parsedURL) {
if (_parsedURL.protocol == 'http:') {
return httpAgent;
} else {
return httpsAgent;
}
}
}
An HTTP(S) request containing information about URL, method, headers, and the body. This class implements the Body interface.
Due to the nature of Node.js, the following properties are not implemented at this moment:
type, destination, referrer, referrerPolicy, mode, credentials, cache, integrity, keepalive
The following node-fetch extension properties are provided:
follow, compress, counter, agent
See options for exact meaning of these extensions.
(spec-compliant)
input: A string representing a URL, or another Request (which will be cloned)
options: Options for the HTTP(S) request
Constructs a new Request object. The constructor is identical to that in the browser.
In most cases, calling fetch(url, options) directly is simpler than constructing a Request object.
An HTTP(S) response. This class implements the Body interface.
The following properties are not implemented in node-fetch at this moment:
Response.error(), Response.redirect(), type, trailer
(spec-compliant)
body: A String or Readable stream
options: A ResponseInit options dictionary
Constructs a new Response object. The constructor is identical to that in the browser.
Because Node.js does not implement service workers (for which this class was designed), one rarely has to construct a Response directly.
(spec-compliant)
Convenience property representing whether the request ended normally. Will evaluate to true if the response status was greater than or equal to 200 but smaller than 300.
(spec-compliant)
Convenience property representing whether the request has been redirected at least once. Will evaluate to true if the internal redirect counter is greater than 0.
This class allows manipulating and iterating over a set of HTTP headers. All methods specified in the Fetch Standard are implemented.
(spec-compliant)
init: Optional argument to pre-fill the Headers object
Construct a new Headers object. init can be either null, a Headers object, a key-value map object, or any iterable object.
// Example adapted from https://fetch.spec.whatwg.org/#example-headers-class
const meta = {
'Content-Type': 'text/xml',
'Breaking-Bad': '<3'
};
const headers = new Headers(meta);
// The above is equivalent to
const meta = [
[ 'Content-Type', 'text/xml' ],
[ 'Breaking-Bad', '<3' ]
];
const headers = new Headers(meta);
// You can in fact use any iterable objects, like a Map or even another Headers
const meta = new Map();
meta.set('Content-Type', 'text/xml');
meta.set('Breaking-Bad', '<3');
const headers = new Headers(meta);
const copyOfHeaders = new Headers(headers);
Body is an abstract interface with methods that are applicable to both Request and Response classes.
The following methods are not implemented in node-fetch at this moment:
formData()
(deviation from spec)
Readable stream
Data are encapsulated in the Body object. Note that while the Fetch Standard requires the property to always be a WHATWG ReadableStream, in node-fetch it is a Node.js Readable stream.
(spec-compliant)
Boolean
A boolean property indicating whether this body has been consumed. Per the spec, a consumed body cannot be used again.
(spec-compliant)
Promise
Consume the body and return a promise that will resolve to one of these formats.
(node-fetch extension)
Promise<Buffer>
Consume the body and return a promise that will resolve to a Buffer.
(node-fetch extension)
Promise<String>
Identical to body.text(), except instead of always converting to UTF-8, encoding sniffing will be performed and text converted to UTF-8 if possible.
(This API requires an optional dependency of the npm package encoding, which you need to install manually. webpack users may see a warning message due to this optional dependency.)
(node-fetch extension)
An operational error in the fetching process. See ERROR-HANDLING.md for more info.
(node-fetch extension)
An Error thrown when the request is aborted in response to an AbortSignal’s abort event. It has a name property of AbortError. See ERROR-HANDLING.md for more info.
Thanks to github/fetch for providing a solid implementation reference.
node-fetch v1 was maintained by [@bitinn](https://github.com/bitinn); v2 was maintained by [@TimothyGu](https://github.com/timothygu), [@bitinn](https://github.com/bitinn) and [@jimmywarting](https://github.com/jimmywarting); v2 readme is written by [@jkantr](https://github.com/jkantr).
This module provides several classes in support of Joyent’s Best Practices for Error Handling in Node.js. If you find any of the behavior here confusing or surprising, check out that document first.
The error classes here support:
The classes here are:
First, install the package:
npm install verror
If nothing else, you can use VError as a drop-in replacement for the built-in JavaScript Error class, with the addition of printf-style messages:
This prints:
missing file: "/etc/passwd"
You can also pass a cause argument, which is any other Error object:
var fs = require('fs');
var filename = '/nonexistent';
fs.stat(filename, function (err1) {
var err2 = new VError(err1, 'stat "%s"', filename);
console.error(err2.message);
});
This prints out:
stat "/nonexistent": ENOENT, stat '/nonexistent'
which resembles how Unix programs typically report errors:
$ sort /nonexistent
sort: open failed: /nonexistent: No such file or directory
To match the Unixy feel, when you print out the error, just prepend the program’s name to the VError’s message. Or just call node-cmdutil.fail(your_verror), which does this for you.
You can get the next-level Error using err.cause():
prints:
ENOENT, stat '/nonexistent'
Of course, you can chain these as many times as you want, and it works with any kind of Error:
var err1 = new Error('No such file or directory');
var err2 = new VError(err1, 'failed to stat "%s"', '/junk');
var err3 = new VError(err2, 'request failed');
console.error(err3.message);
This prints:
request failed: failed to stat "/junk": No such file or directory
The idea is that each layer in the stack annotates the error with a description of what it was doing. The end result is a message that explains what happened at each level.
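Since ES2022, the built-in Error also supports a standard cause option, so this layering idea can be sketched with the standard library alone. Here, fullMessage is a hypothetical helper (not part of verror) that joins each layer's message the way VError does:

```javascript
// Sketch: composing layered error messages with the standard `cause`
// option, mimicking VError's message chaining.
function fullMessage(err) {
  // Walk the cause chain, joining each layer's message with ": ".
  const parts = [];
  for (let e = err; e; e = e.cause) parts.push(e.message);
  return parts.join(': ');
}

const err1 = new Error('No such file or directory');
const err2 = new Error('failed to stat "/junk"', { cause: err1 });
const err3 = new Error('request failed', { cause: err2 });

console.log(fullMessage(err3));
// → request failed: failed to stat "/junk": No such file or directory
```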
You can also decorate Error objects with additional information so that callers can not only handle each kind of error differently, but also construct their own error messages (e.g., to localize them, format them, group them by type, and so on). See the example below.
The two main goals for VError are to make it easy to construct clear, complete error messages intended for people, and to make it easy to attach programmatically-accessible metadata to errors, e.g. properties like "ip": "192.168.1.2" and "tcpPort": 80. Such metadata can be used for feeding into monitoring systems, analyzing large numbers of Errors (as from a log file), or localizing error messages.

To really make this useful, it also needs to be easy to compose Errors: higher-level code should be able to augment the Errors reported by lower-level code to provide a more complete description of what happened. Instead of saying “connection refused”, you can say “operation X failed: connection refused”. That’s why VError supports causes.
In order for all this to work, programmers need to know that it’s generally safe to wrap lower-level Errors with higher-level ones. If you have existing code that handles Errors produced by a library, you should be able to wrap those Errors with a VError to add information without breaking the error handling code. There are two obvious ways that this could break such consumers:
Some consumers use name to determine what kind of Error they’ve got. To ensure compatibility, you can create VErrors with custom names, but this approach isn’t great because it prevents you from representing complex failures. For this reason, VError provides findCauseByName, which essentially asks: does this Error or any of its causes have this specific type? If error handling code uses findCauseByName, then subsystems can construct very specific causal chains for debuggability and still let people handle simple cases easily. There’s an example below.

Other consumers inspect an Error’s properties directly: not just name, message, and stack, but also fileName, lineNumber, and a few others. Plus, it’s useful for some Error subclasses to have their own private properties, and there’d be no way to know whether these should be copied. For these reasons, VError first-classes these information properties. You have to provide them in the constructor, you can only fetch them with the info() function, and VError takes care of making sure properties from causes wind up in the info() output.

Let’s put this all together with an example from the node-fast RPC library. node-fast implements a simple RPC protocol for Node programs. There’s a server and client interface, and clients make RPC requests to servers. Let’s say the server fails with an UnauthorizedError with message “user ‘bob’ is not authorized”. The client wraps all server errors with a FastServerError. The client also wraps all request errors with a FastRequestError that includes the name of the RPC call being made. The result of this failed RPC might look like this:
name: FastRequestError
message: "request failed: server error: user 'bob' is not authorized"
rpcMsgid:
When the caller uses VError.info(), the information properties are collapsed so that it looks like this:
message: "request failed: server error: user 'bob' is not authorized"
rpcMsgid:
Taking this apart:
Error handling code can use findCauseByName('FastServerError') rather than checking the name field directly.

It’s not expected that you’d use these complex forms all the time. Despite supporting the complex case above, you can still just do:
new VError("my service isn't working");
for the simple cases.
VError, WError, and SError are convenient drop-in replacements for Error that support printf-style arguments, first-class causes, informational properties, and other useful features.
The VError constructor has several forms:
/*
* This is the most general form. You can specify any supported options
* (including "cause" and "info") this way.
*/
new VError(options, sprintf_args...)
/*
* This is a useful shorthand when the only option you need is "cause".
*/
new VError(cause, sprintf_args...)
/*
* This is a useful shorthand when you don't need any options at all.
*/
new VError(sprintf_args...)
All of these forms construct a new VError that behaves just like the built-in JavaScript Error class, with some additional methods described below.
In the first form, options is a plain object with any of the following optional properties:
| Option name | Type | Meaning |
|---|---|---|
| name | string | Describes what kind of error this is. This is intended for programmatic use to distinguish between different kinds of errors. Note that in modern versions of Node.js, this name is ignored in the stack property value, but callers can still use the name property to get at it. |
| cause | any Error object | Indicates that the new error was caused by cause. See cause() below. If unspecified, the cause will be null. |
| strict | boolean | If true, then null and undefined values in sprintf_args are passed through to sprintf(). Otherwise, these are replaced with the strings 'null' and 'undefined', respectively. |
| constructorOpt | function | If specified, then the stack trace for this error ends at function constructorOpt. Functions called by constructorOpt will not show up in the stack. This is useful when this class is subclassed. |
| info | object | Specifies arbitrary informational properties that are available through the VError.info(err) static class method. See that method for details. |
The second form is equivalent to using the first form with the specified cause as the error’s cause. This form is distinguished from the first form because the first argument is an Error.
The third form is equivalent to using the first form with all default option values. This form is distinguished from the other forms because the first argument is not an object or an Error.
The WError constructor is used exactly the same way as the VError constructor. The SError constructor is also used the same way as the VError constructor, except that in all cases, the strict property is overridden to true.
VError, WError, and SError all provide the same public properties as JavaScript’s built-in Error objects.
| Property name | Type | Meaning |
|---|---|---|
| name | string | Programmatically-usable name of the error. |
| message | string | Human-readable summary of the failure. Programmatically-accessible details are provided through the VError.info(err) class method. |
| stack | string | Human-readable stack trace where the Error was constructed. |
For all of these classes, the printf-style arguments passed to the constructor are processed with sprintf() to form a message. For WError, this becomes the complete message property. For SError and VError, this message is prepended to the message of the cause, if any (with a suitable separator), and the result becomes the message property.
The stack property is managed entirely by the underlying JavaScript implementation. It’s generally implemented using a getter function because constructing the human-readable stack trace is somewhat expensive.
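As a rough sketch of the message-composition difference (plain functions standing in for the real classes, with the ": " separator described above):

```javascript
// Sketch (not the real implementation): how VError/SError vs. WError
// compose the `message` property, per the description above.
function verrorStyleMessage(msg, cause) {
  // VError/SError: own message prepended to the cause's message.
  return cause ? `${msg}: ${cause.message}` : msg;
}

function werrorStyleMessage(msg, cause) {
  // WError: the cause is retained for inspection, but its message is
  // not folded into this error's message.
  return msg;
}

const cause = new Error("ENOENT, stat '/nonexistent'");
console.log(verrorStyleMessage('stat "/nonexistent"', cause));
// → stat "/nonexistent": ENOENT, stat '/nonexistent'
console.log(werrorStyleMessage('stat "/nonexistent"', cause));
// → stat "/nonexistent"
```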
The following methods are defined on the VError class and as exported functions on the verror module. They’re defined this way rather than using methods on VError instances so that they can be used on Errors not created with VError.
VError.cause(err)
The cause() function returns the next Error in the cause chain for err, or null if there is no next error. See the cause argument to the constructor. Errors can have arbitrarily long cause chains. You can walk the cause chain by invoking VError.cause(err) on each subsequent return value. If err is not a VError, the cause is null.
VError.info(err)
Returns an object with all of the extra error information that’s been associated with this Error and all of its causes. These are the properties passed in using the info option to the constructor. Properties not specified in the constructor for this Error are implicitly inherited from this error’s cause.
These properties are intended to provide programmatically-accessible metadata about the error. For an error that indicates a failure to resolve a DNS name, informational properties might include the DNS name to be resolved, or even the list of resolvers used to resolve it. The values of these properties should generally be plain objects (i.e., consisting only of null, undefined, numbers, booleans, strings, and objects and arrays containing only other plain objects).
VError.fullStack(err)
Returns a string containing the full stack trace, with all nested errors recursively reported as 'caused by:' + err.stack.
VError.findCauseByName(err, name)
The findCauseByName() function traverses the cause chain for err, looking for an error whose name property matches the passed in name value. If no match is found, null is returned.
If all you want is to know whether there’s a cause (and you don’t care what it is), you can use VError.hasCauseWithName(err, name).
If a vanilla error or a non-VError error is passed in, then there is no cause chain to traverse. In this scenario, the function will check the name property of only err.
VError.hasCauseWithName(err, name)
Returns true if and only if VError.findCauseByName(err, name) would return a non-null value. This essentially determines whether err has any cause in its cause chain that has name name.
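The traversal these two functions perform can be sketched with plain Errors and the standard cause property (the error names below are taken from the node-fast example above; this is an illustration of the semantics, not verror's implementation):

```javascript
// Sketch of VError.findCauseByName semantics using plain Errors.
function findCauseByName(err, name) {
  // Walk err and its cause chain, returning the first match by name.
  for (let e = err; e instanceof Error; e = e.cause) {
    if (e.name === name) return e;
  }
  return null; // no matching error anywhere in the chain
}

const inner = new Error("user 'bob' is not authorized");
inner.name = 'UnauthorizedError';
const server = new Error('server error', { cause: inner });
server.name = 'FastServerError';
const request = new Error('request failed', { cause: server });
request.name = 'FastRequestError';

console.log(findCauseByName(request, 'FastServerError') !== null); // true
console.log(findCauseByName(request, 'TimeoutError'));             // null
```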
VError.errorFromList(errors)
Given an array of Error objects (possibly empty), return a single error representing the whole collection of errors. If the list has:
0 elements, this returns null
1 element, this returns the sole error
more than 1 element, this returns a MultiError referring to the whole list

This is useful for cases where an operation may produce any number of errors, and you ultimately want to implement the usual callback(err) pattern. You can accumulate the errors in an array and then invoke callback(VError.errorFromList(errors)) when the operation is complete.
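The behavior described above can be sketched with the standard AggregateError standing in for MultiError (the "first of N errors" message format follows the MultiError example later in this document):

```javascript
// Sketch of VError.errorFromList semantics, using the standard
// AggregateError in place of MultiError.
function errorFromList(errors) {
  if (errors.length === 0) return null;      // no errors: nothing to report
  if (errors.length === 1) return errors[0]; // single error: return it as-is
  return new AggregateError(
    errors,
    `first of ${errors.length} errors: ${errors[0].message}`
  );
}

console.log(errorFromList([]));                          // null
console.log(errorFromList([new Error('boom')]).message); // boom
console.log(errorFromList([new Error('a'), new Error('b')]).message);
// → first of 2 errors: a
```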
VError.errorForEach(err, func)
Convenience function for iterating an error that may itself be a MultiError.
In all cases, err must be an Error. If err is a MultiError, then func is invoked as func(errorN) for each of the underlying errors of the MultiError. If err is any other kind of error, func is invoked once as func(err). In all cases, func is invoked synchronously.
This is useful for cases where an operation may produce any number of warnings that may be encapsulated with a MultiError – but may not be.
This function does not iterate an error’s cause chain.
The “Demo” section above covers several basic cases. Here’s a more advanced case:
var err1 = new VError('something bad happened');
/* ... */
var err2 = new VError({
'name': 'ConnectionError',
'cause': err1,
'info': {
'errno': 'ECONNREFUSED',
'remote_ip': '127.0.0.1',
'port': 215
}
}, 'failed to connect to "%s:%d"', '127.0.0.1', 215);
console.log(err2.message);
console.log(err2.name);
console.log(VError.info(err2));
console.log(err2.stack);
This outputs:
failed to connect to "127.0.0.1:215": something bad happened
ConnectionError
{ errno: 'ECONNREFUSED', remote_ip: '127.0.0.1', port: 215 }
ConnectionError: failed to connect to "127.0.0.1:215": something bad happened
    at Object.
Information properties are inherited up the cause chain, with values at the top of the chain overriding same-named values lower in the chain. To continue that example:
var err3 = new VError({
'name': 'RequestError',
'cause': err2,
'info': {
'errno': 'EBADREQUEST'
}
}, 'request failed');
console.log(err3.message);
console.log(err3.name);
console.log(VError.info(err3));
console.log(err3.stack);
This outputs:
request failed: failed to connect to "127.0.0.1:215": something bad happened
RequestError
{ errno: 'EBADREQUEST', remote_ip: '127.0.0.1', port: 215 }
RequestError: request failed: failed to connect to "127.0.0.1:215": something bad happened
    at Object.
You can also print the complete stack trace of combined Errors by using VError.fullStack(err).
var err1 = new VError('something bad happened');
/* ... */
var err2 = new VError(err1, 'something really bad happened here');
console.log(VError.fullStack(err2));
This outputs:
VError: something really bad happened here: something bad happened
    at Object.
VError.fullStack is also safe to use on regular Errors, so feel free to use it whenever you need to extract the stack trace from an Error, regardless of whether it’s a VError or not.
MultiError is an Error class that represents a group of Errors. This is used when you logically need to provide a single Error, but you want to preserve information about multiple underlying Errors. A common case is when you execute several operations in parallel and some of them fail.
MultiErrors are constructed as:

new MultiError(error_list)

where error_list is an array of at least one Error object.
The cause of the MultiError is the first error provided. None of the other VError options are supported. The message for a MultiError consists of the message from the first error, prepended with a message indicating that there were other errors.
For example:
err = new MultiError([
new Error('failed to resolve DNS name "abc.example.com"'),
new Error('failed to resolve DNS name "def.example.com"'),
]);
console.error(err.message);
outputs:
first of 2 errors: failed to resolve DNS name "abc.example.com"
See the convenience function VError.errorFromList, which is sometimes simpler to use than this constructor.
errors()
Returns an array of the errors used to construct this MultiError.
See separate contribution guidelines.
Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.
Please consider following this project’s author, Jon Schlinkert, and consider starring the project to show your :heart: and support.
Install with npm:

npm install braces
See the changelog for details.
Brace patterns make globs more powerful by adding the ability to match specific ranges and sequences of characters.
The main export is a function that takes one or more brace patterns and options.
const braces = require('braces');
// braces(patterns[, options]);
console.log(braces(['{01..05}', '{a..e}']));
//=> ['(0[1-5])', '([a-e])']
console.log(braces(['{01..05}', '{a..e}'], { expand: true }));
//=> ['01', '02', '03', '04', '05', 'a', 'b', 'c', 'd', 'e']
By default, brace patterns are compiled into strings that are optimized for creating regular expressions and matching.
Compiled
console.log(braces('a/{x,y,z}/b'));
//=> ['a/(x|y|z)/b']
console.log(braces(['a/{01..20}/b', 'a/{1..5}/b']));
//=> [ 'a/(0[1-9]|1[0-9]|20)/b', 'a/([1-5])/b' ]
Expanded
Enable brace expansion by setting the expand option to true, or by using braces.expand() (returns an array similar to what you’d expect from Bash, or echo {1..5}, or minimatch):
console.log(braces('a/{x,y,z}/b', { expand: true }));
//=> ['a/x/b', 'a/y/b', 'a/z/b']
console.log(braces.expand('{01..10}'));
//=> ['01','02','03','04','05','06','07','08','09','10']
Expand lists (like Bash “sets”):
console.log(braces('a/{foo,bar,baz}/*.js'));
//=> ['a/(foo|bar|baz)/*.js']
console.log(braces.expand('a/{foo,bar,baz}/*.js'));
//=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js']
Expand ranges of characters (like Bash “sequences”):
console.log(braces.expand('{1..3}')); // ['1', '2', '3']
console.log(braces.expand('a/{1..3}/b')); // ['a/1/b', 'a/2/b', 'a/3/b']
console.log(braces('{a..c}', { expand: true })); // ['a', 'b', 'c']
console.log(braces('foo/{a..c}', { expand: true })); // ['foo/a', 'foo/b', 'foo/c']
// supports zero-padded ranges
console.log(braces('a/{01..03}/b')); //=> ['a/(0[1-3])/b']
console.log(braces('a/{001..300}/b')); //=> ['a/(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)/b']
See fill-range for all available range-expansion options.
Steps, or increments, may be used with ranges:
console.log(braces.expand('{2..10..2}'));
//=> ['2', '4', '6', '8', '10']
console.log(braces('{2..10..2}'));
//=> ['(2|4|6|8|10)']
When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.
Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.
“Expanded” braces
console.log(braces.expand('a{b,c,/{x,y}}/e'));
//=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e']
console.log(braces.expand('a/{x,{1..5},y}/c'));
//=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c']
“Optimized” braces
console.log(braces('a{b,c,/{x,y}}/e'));
//=> ['a(b|c|/(x|y))/e']
console.log(braces('a/{x,{1..5},y}/c'));
//=> ['a/(x|([1-5])|y)/c']
Escaping braces
A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:
console.log(braces.expand('a\\{d,c,b}e'));
//=> ['a{d,c,b}e']
console.log(braces.expand('a{d,c,b\\}e'));
//=> ['a{d,c,b}e']
Escaping commas
Commas inside braces may also be escaped:
console.log(braces.expand('a{b\\,c}d'));
//=> ['a{b,c}d']
console.log(braces.expand('a{d\\,c,b}e'));
//=> ['ad,ce', 'abe']
Single items
Following Bash conventions, a brace pattern is also not expanded when it contains a single character (for example, {a} is left unexpanded).
options.maxLength
Type: Number
Default: 65,536
Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.
options.expand
Type: Boolean
Default: undefined
Description: Generate an “expanded” brace pattern (alternatively you can use the braces.expand() method, which does the same thing).
options.nodupes
Type: Boolean
Default: undefined
Description: Remove duplicates from the returned array.
options.rangeLimit
Type: Number
Default: 1000
Description: To prevent malicious patterns from being passed by users, an error is thrown when braces.expand() is used (or options.expand is true) and the generated range would exceed the rangeLimit.
You can customize options.rangeLimit or set it to Infinity to disable this altogether.
Examples
// pattern exceeds the "rangeLimit", so it's optimized automatically
console.log(braces.expand('{1..1000}'));
//=> ['([1-9]|[1-9][0-9]{1,2}|1000)']
// pattern does not exceed "rangeLimit", so it's NOT optimized
console.log(braces.expand('{1..100}'));
//=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100']
options.transform
Type: Function
Default: undefined
Description: Customize range expansion.
Example: Transforming non-numeric values
const alpha = braces.expand('x/{a..e}/y', {
transform(value, index) {
// When non-numeric values are passed, "value" is a character code.
return 'foo/' + String.fromCharCode(value) + '-' + index;
}
});
console.log(alpha);
//=> [ 'x/foo/a-0/y', 'x/foo/b-1/y', 'x/foo/c-2/y', 'x/foo/d-3/y', 'x/foo/e-4/y' ]
Example: Transforming numeric values
const numeric = braces.expand('{1..5}', {
transform(value) {
// when numeric values are passed, "value" is a number
return 'foo/' + value * 2;
}
});
console.log(numeric);
//=> [ 'foo/2', 'foo/4', 'foo/6', 'foo/8', 'foo/10' ]
options.quantifiers
Type: Boolean
Default: undefined
Description: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.
Unfortunately, regex quantifiers happen to share the same syntax as Bash lists.
The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.
Examples
const braces = require('braces');
console.log(braces('a/b{1,3}/{x,y,z}'));
//=> [ 'a/b(1|3)/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true}));
//=> [ 'a/b{1,3}/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true, expand: true}));
//=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ]
options.unescape
Type: Boolean
Default: undefined
Description: Strip backslashes that were used for escaping from the result.
Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).
In addition to “expansion”, braces are also used for matching. In other words:
More about brace expansion
There are two main types of brace expansion:
lists (or “sets”), which use comma-separated values inside braces: {a,b,c}
sequences, which use two dot-separated values: a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.

Here are some example brace patterns to illustrate how they work:
Sets
{a,b,c} => a b c
{a,b,c}{1,2} => a1 a2 b1 b2 c1 c2
Sequences
{1..9} => 1 2 3 4 5 6 7 8 9
{4..-4} => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3} => 1 4 7 10 13 16 19
{a..j} => a b c d e f g h i j
{j..a} => j i h g f e d c b a
{a..z..3} => a d g j m p s v y
Combination
Sets and sequences can be mixed together or used along with any other strings.
{a,b,c}{1..3} => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar
The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.
In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.
For example, the pattern foo/{1..3}/bar would match any of following strings:
foo/1/bar
foo/2/bar
foo/3/bar
But not:
baz/1/qux
baz/2/qux
baz/3/qux
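This is the idea behind the "optimized" output: foo/{1..3}/bar compiles to a regex-style alternation, so matching never requires expanding the range into an array. A plain-regex sketch of the compiled form:

```javascript
// Sketch: the optimized form of foo/{1..3}/bar as a regular expression.
// The character class [1-3] plays the role of the expanded range.
const re = /^foo\/([1-3])\/bar$/;

console.log(re.test('foo/2/bar')); // true
console.log(re.test('baz/2/qux')); // false
console.log(re.test('foo/4/bar')); // false (outside the range)
```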
Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of following strings:
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.
“brace bombs”
For a more detailed explanation with examples, see the geometric complexity section.
Jump to the performance section to see how Braces solves this problem in comparison to other libraries.
At minimum, brace patterns with sets limited to two elements have quadratic or O(n^2) complexity. But the complexity of the algorithm increases exponentially as the number of sets, and elements per set, increases, which is O(n^c).
For example, the following sets demonstrate quadratic (O(n^2)) complexity:
{1,2}{3,4} => (2X2) => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246
But add an element to a set, and we get an n-fold Cartesian product with O(n^c) complexity:
{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248
249 257 258 259 267 268 269 347 348 349 357
358 359 367 368 369
Now, imagine how this complexity grows given that each element is an n-tuple:
{1..100}{1..100} => (100X100) => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)
Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
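Because the result size is just the product of each group's size, the cost of an expansion can be estimated before expanding anything (a small sketch, not part of the braces API):

```javascript
// Number of expansion results = product of the sizes of each set/range.
function expansionCount(groupSizes) {
  return groupSizes.reduce((n, size) => n * size, 1);
}

console.log(expansionCount([2, 2, 2]));       // {1,2}{3,4}{5,6} -> 8 results
console.log(expansionCount([100, 100, 100])); // {1..100}{1..100}{1..100} -> 1000000 results
```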
More information
Interested in learning more about brace expansion?
Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.
Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.
Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.
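The idea can be sketched in a few lines: compile a small numeric range to a regex alternation instead of an array, then match against it. The helper name here is invented for illustration, and the real braces library emits far more compact patterns for large ranges:

```javascript
// Simplified sketch: rewrite {a..b} as an alternation (a|a+1|...|b)
// and anchor the whole pattern, so matching needs no expanded array.
function rangeToRegex(pattern) {
  const source = pattern.replace(/\{(\d+)\.\.(\d+)\}/g, (_, a, b) => {
    const parts = [];
    for (let i = +a; i <= +b; i++) parts.push(i);
    return '(' + parts.join('|') + ')';
  });
  return new RegExp('^' + source + '$');
}

console.log(rangeToRegex('foo/{1..3}/bar').test('foo/2/bar')); // true
console.log(rangeToRegex('foo/{1..3}/bar').test('baz/2/qux')); // false
```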
The proof is in the numbers
Minimatch gets exponentially slower as patterns increase in complexity, braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.
| Pattern | braces | minimatch |
|---|---|---|
| {1..9007199254740991}[^1] | 298 B (5ms 459μs) | N/A (freezes) |
| {1..1000000000000000} | 41 B (1ms 15μs) | N/A (freezes) |
| {1..100000000000000} | 40 B (890μs) | N/A (freezes) |
| {1..10000000000000} | 39 B (2ms 49μs) | N/A (freezes) |
| {1..1000000000000} | 38 B (608μs) | N/A (freezes) |
| {1..100000000000} | 37 B (397μs) | N/A (freezes) |
| {1..10000000000} | 35 B (983μs) | N/A (freezes) |
| {1..1000000000} | 34 B (798μs) | N/A (freezes) |
| {1..100000000} | 33 B (733μs) | N/A (freezes) |
| {1..10000000} | 32 B (5ms 632μs) | 78.89 MB (16s 388ms 569μs) |
| {1..1000000} | 31 B (1ms 381μs) | 6.89 MB (1s 496ms 887μs) |
| {1..100000} | 30 B (950μs) | 588.89 kB (146ms 921μs) |
| {1..10000} | 29 B (1ms 114μs) | 48.89 kB (14ms 187μs) |
| {1..1000} | 28 B (760μs) | 3.89 kB (1ms 453μs) |
| {1..100} | 22 B (345μs) | 291 B (196μs) |
| {1..10} | 10 B (533μs) | 20 B (37μs) |
| {1..3} | 7 B (190μs) | 5 B (27μs) |
When you need expansion, braces is still much faster.
(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)
| Pattern | braces | minimatch |
|---|---|---|
| {1..10000000} | 78.89 MB (2s 698ms 642μs) | 78.89 MB (18s 601ms 974μs) |
| {1..1000000} | 6.89 MB (458ms 576μs) | 6.89 MB (1s 491ms 621μs) |
| {1..100000} | 588.89 kB (20ms 728μs) | 588.89 kB (156ms 919μs) |
| {1..10000} | 48.89 kB (2ms 202μs) | 48.89 kB (13ms 641μs) |
| {1..1000} | 3.89 kB (1ms 796μs) | 3.89 kB (1ms 958μs) |
| {1..100} | 291 B (424μs) | 291 B (211μs) |
| {1..10} | 20 B (487μs) | 20 B (72μs) |
| {1..3} | 5 B (166μs) | 5 B (27μs) |
If you’d like to run these comparisons yourself, see test/support/generate.js.
Install dev dependencies:
Braces is more accurate, without sacrificing performance.
# range (expanded)
braces x 29,040 ops/sec ±3.69% (91 runs sampled)
minimatch x 4,735 ops/sec ±1.28% (90 runs sampled)
# range (optimized for regex)
braces x 382,878 ops/sec ±0.56% (94 runs sampled)
minimatch x 1,040 ops/sec ±0.44% (93 runs sampled)
# nested ranges (expanded)
braces x 19,744 ops/sec ±2.27% (92 runs sampled)
minimatch x 4,579 ops/sec ±0.50% (93 runs sampled)
# nested ranges (optimized for regex)
braces x 246,019 ops/sec ±2.02% (93 runs sampled)
minimatch x 1,028 ops/sec ±0.39% (94 runs sampled)
# set (expanded)
braces x 138,641 ops/sec ±0.53% (95 runs sampled)
minimatch x 219,582 ops/sec ±0.98% (94 runs sampled)
# set (optimized for regex)
braces x 388,408 ops/sec ±0.41% (95 runs sampled)
minimatch x 44,724 ops/sec ±0.91% (89 runs sampled)
# nested sets (expanded)
braces x 84,966 ops/sec ±0.48% (94 runs sampled)
minimatch x 140,720 ops/sec ±0.37% (95 runs sampled)
# nested sets (optimized for regex)
braces x 263,340 ops/sec ±2.06% (92 runs sampled)
minimatch x 28,714 ops/sec ±0.40% (90 runs sampled)
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
| Commits | Contributor |
|---|---|
| 197 | jonschlinkert |
| 4 | doowb |
| 1 | es128 |
| 1 | eush77 |
| 1 | hemanth |
| 1 | wtgtybhertgeghgtwtg |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.8.0, on April 08, 2019.
# snapdragon-util
Utilities for the snapdragon parser/compiler.
Table of Contents
Install with npm:
Install with yarn:
Returns true if the given value is a node.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var node = new Node({type: 'foo'});
console.log(utils.isNode(node)); //=> true
console.log(utils.isNode({})); //=> false
Emit an empty string for the given node.
Params
node {Object}: Instance of snapdragon-node
returns {undefined}
Example
Append node.val to compiler.output, exactly as it was created by the parser.
Params
node {Object}: Instance of snapdragon-node
returns {undefined}
Example
Previously named .emit, this method appends the given val to compiler.output for the given node. Useful when you know in advance what value should be appended, regardless of the actual value of node.val.
Params
node {Object}: Instance of snapdragon-node
returns {Function}: Returns a compiler middleware function.
Example
snapdragon.compiler
.set('i', function(node) {
this.mapVisit(node);
})
.set('i.open', utils.append('<i>'))
.set('i.close', utils.append('</i>'))
Used in compiler middleware, this converts an AST node into an empty text node and deletes node.nodes if it exists. The advantage of this method is that, as opposed to completely removing the node, indices will not need to be re-calculated in sibling nodes, and nothing is appended to the output.
Params
node {Object}: Instance of snapdragon-node
nodes {Array}: Optionally pass a new nodes value, to replace the existing node.nodes array.
Example
utils.toNoop(node);
// convert `node.nodes` to the given value instead of deleting it
utils.toNoop(node, []);
Visit node with the given fn. The built-in .visit method in snapdragon automatically calls registered compilers; this allows you to pass a visitor function.
Params
node {Object}: Instance of snapdragon-node
fn {Function}
returns {Object}: returns the node after recursively visiting all child nodes.
Example
snapdragon.compiler.set('i', function(node) {
utils.visit(node, function(childNode) {
// do stuff with "childNode"
return childNode;
});
});
Map visit the given fn over node.nodes. This is called by visit; use this method if you do not want fn to be called on the first node.
Params
node {Object}: Instance of snapdragon-node
options {Object}
fn {Function}
returns {Object}: returns the node
Example
snapdragon.compiler.set('i', function(node) {
utils.mapVisit(node, function(childNode) {
// do stuff with "childNode"
return childNode;
});
});
Unshift an *.open node onto node.nodes.
Params
node {Object}: Instance of snapdragon-node
Node {Function}: (required) Node constructor function from snapdragon-node.
filter {Function}: Optionally specify a filter function to exclude the node.
returns {Object}: Returns the created opening node.
Example
var Node = require('snapdragon-node');
snapdragon.parser.set('brace', function(node) {
var match = this.match(/^{/);
if (match) {
var parent = new Node({type: 'brace'});
utils.addOpen(parent, Node);
console.log(parent.nodes[0]);
// { type: 'brace.open', val: '' };
// push the parent "brace" node onto the stack
this.push(parent);
// return the parent node, so it's also added to the AST
return parent;
}
});
Push a *.close node onto node.nodes.
Params
node {Object}: Instance of snapdragon-node
Node {Function}: (required) Node constructor function from snapdragon-node.
filter {Function}: Optionally specify a filter function to exclude the node.
returns {Object}: Returns the created closing node.
Example
var Node = require('snapdragon-node');
snapdragon.parser.set('brace', function(node) {
var match = this.match(/^}/);
if (match) {
var parent = this.parent();
if (parent.type !== 'brace') {
throw new Error('missing opening: ' + '}');
}
utils.addClose(parent, Node);
console.log(parent.nodes[parent.nodes.length - 1]);
// { type: 'brace.close', val: '' };
// no need to return a node, since the parent
// was already added to the AST
return;
}
});
Wraps the given node with *.open and *.close nodes.
Params
node {Object}: Instance of snapdragon-node
Node {Function}: (required) Node constructor function from snapdragon-node.
filter {Function}: Optionally specify a filter function to exclude the node.
returns {Object}: Returns the node
Push the given node onto parent.nodes, and set parent as node.parent.
Params
parent {Object}
node {Object}: Instance of snapdragon-node
returns {Object}: Returns the child node
Example
var parent = new Node({type: 'foo'});
var node = new Node({type: 'bar'});
utils.pushNode(parent, node);
console.log(parent.nodes[0].type) // 'bar'
console.log(node.parent.type) // 'foo'
Unshift node onto parent.nodes, and set parent as node.parent.
Params
parent {Object}
node {Object}: Instance of snapdragon-node
returns {undefined}
Example
var parent = new Node({type: 'foo'});
var node = new Node({type: 'bar'});
utils.unshiftNode(parent, node);
console.log(parent.nodes[0].type) // 'bar'
console.log(node.parent.type) // 'foo'
Pop the last node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.
Params
parent {Object}
node {Object}: Instance of snapdragon-node
returns {Number|Undefined}: Returns the length of node.nodes or undefined.
Example
var parent = new Node({type: 'foo'});
utils.pushNode(parent, new Node({type: 'foo'}));
utils.pushNode(parent, new Node({type: 'bar'}));
utils.pushNode(parent, new Node({type: 'baz'}));
console.log(parent.nodes.length); //=> 3
utils.popNode(parent);
console.log(parent.nodes.length); //=> 2
Shift the first node off of parent.nodes. The advantage of using this method is that it checks for node.nodes and works with any version of snapdragon-node.
Params
parent {Object}
node {Object}: Instance of snapdragon-node
returns {Number|Undefined}: Returns the length of node.nodes or undefined.
Example
var parent = new Node({type: 'foo'});
utils.pushNode(parent, new Node({type: 'foo'}));
utils.pushNode(parent, new Node({type: 'bar'}));
utils.pushNode(parent, new Node({type: 'baz'}));
console.log(parent.nodes.length); //=> 3
utils.shiftNode(parent);
console.log(parent.nodes.length); //=> 2
Remove the specified node from parent.nodes.
Params
parent {Object}
node {Object}: Instance of snapdragon-node
returns {Object|undefined}: Returns the removed node, if successful, or undefined if it does not exist on parent.nodes.
Example
var parent = new Node({type: 'abc'});
var foo = new Node({type: 'foo'});
utils.pushNode(parent, foo);
utils.pushNode(parent, new Node({type: 'bar'}));
utils.pushNode(parent, new Node({type: 'baz'}));
console.log(parent.nodes.length); //=> 3
utils.removeNode(parent, foo);
console.log(parent.nodes.length); //=> 2
Returns true if node.type matches the given type. Throws a TypeError if node is not an instance of Node.
Params
node {Object}: Instance of snapdragon-node
type {String}
returns {Boolean}
Example
var Node = require('snapdragon-node');
var node = new Node({type: 'foo'});
console.log(utils.isType(node, 'foo')); // true
console.log(utils.isType(node, 'bar')); // false
Returns true if the given node has the given type in node.nodes. Throws a TypeError if node is not an instance of Node.
Params
node {Object}: Instance of snapdragon-node
type {String}
returns {Boolean}
Example
var Node = require('snapdragon-node');
var node = new Node({
type: 'foo',
nodes: [
new Node({type: 'bar'}),
new Node({type: 'baz'})
]
});
console.log(utils.hasType(node, 'xyz')); // false
console.log(utils.hasType(node, 'baz')); // true
Returns the first node from node.nodes of the given type.
Params
nodes {Array}
type {String}
returns {Object|undefined}: Returns the first matching node or undefined.
Example
var node = new Node({
type: 'foo',
nodes: [
new Node({type: 'text', val: 'abc'}),
new Node({type: 'text', val: 'xyz'})
]
});
var textNode = utils.firstOfType(node.nodes, 'text');
console.log(textNode.val);
//=> 'abc'
Returns the node at the specified index, or the first node of the given type from node.nodes.
Params
nodes {Array}
type {String|Number}: Node type or index.
returns {Object}: Returns a node or undefined.
Example
var node = new Node({
type: 'foo',
nodes: [
new Node({type: 'text', val: 'abc'}),
new Node({type: 'text', val: 'xyz'})
]
});
var nodeOne = utils.findNode(node.nodes, 'text');
console.log(nodeOne.val);
//=> 'abc'
var nodeTwo = utils.findNode(node.nodes, 1);
console.log(nodeTwo.val);
//=> 'xyz'
Returns true if the given node is an "*.open" node.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var brace = new Node({type: 'brace'});
var open = new Node({type: 'brace.open'});
var close = new Node({type: 'brace.close'});
console.log(utils.isOpen(brace)); // false
console.log(utils.isOpen(open)); // true
console.log(utils.isOpen(close)); // false
Returns true if the given node is a "*.close" node.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var brace = new Node({type: 'brace'});
var open = new Node({type: 'brace.open'});
var close = new Node({type: 'brace.close'});
console.log(utils.isClose(brace)); // false
console.log(utils.isClose(open)); // false
console.log(utils.isClose(close)); // true
Returns true if node.nodes has an .open node.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var brace = new Node({
type: 'brace',
nodes: []
});
var open = new Node({type: 'brace.open'});
console.log(utils.hasOpen(brace)); // false
brace.pushNode(open);
console.log(utils.hasOpen(brace)); // true
Returns true if node.nodes has a .close node.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var brace = new Node({
type: 'brace',
nodes: []
});
var close = new Node({type: 'brace.close'});
console.log(utils.hasClose(brace)); // false
brace.pushNode(close);
console.log(utils.hasClose(brace)); // true
Returns true if node.nodes has both .open and .close nodes.
Params
node {Object}: Instance of snapdragon-node
returns {Boolean}
Example
var Node = require('snapdragon-node');
var brace = new Node({
type: 'brace',
nodes: []
});
var open = new Node({type: 'brace.open'});
var close = new Node({type: 'brace.close'});
console.log(utils.hasOpen(brace)); // false
console.log(utils.hasClose(brace)); // false
brace.pushNode(open);
brace.pushNode(close);
console.log(utils.hasOpen(brace)); // true
console.log(utils.hasClose(brace)); // true
Push the given node onto the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.
Params
state {Object}: The compiler.state object or custom state object.
node {Object}: Instance of snapdragon-node
returns {Array}: Returns the state.inside stack for the given type.
Example
var state = { inside: {}};
var node = new Node({type: 'brace'});
utils.addType(state, node);
console.log(state.inside);
//=> { brace: [{type: 'brace'}] }
Remove the given node from the state.inside array for the given type. This array is used as a specialized “stack” for only the given node.type.
Params
state {Object}: The compiler.state object or custom state object.
node {Object}: Instance of snapdragon-node
returns {Array}: Returns the state.inside stack for the given type.
Example
var state = { inside: {}};
var node = new Node({type: 'brace'});
utils.addType(state, node);
console.log(state.inside);
//=> { brace: [{type: 'brace'}] }
utils.removeType(state, node);
//=> { brace: [] }
Returns true if node.val is an empty string, or node.nodes does not contain any non-empty text nodes.
Params
node {Object}: Instance of snapdragon-node
fn {Function}
returns {Boolean}
Example
var node = new Node({type: 'text'});
utils.isEmpty(node); //=> true
node.val = 'foo';
utils.isEmpty(node); //=> false
Returns true if the state.inside stack for the given type exists and has one or more nodes on it.
Params
state {Object}
type {String}
returns {Boolean}
Example
var state = { inside: {}};
var node = new Node({type: 'brace'});
console.log(utils.isInsideType(state, 'brace')); //=> false
utils.addType(state, node);
console.log(utils.isInsideType(state, 'brace')); //=> true
utils.removeType(state, node);
console.log(utils.isInsideType(state, 'brace')); //=> false
Returns true if node is either a child or grand-child of the given type, or state.inside[type] is a non-empty array.
Params
state {Object}: Either the compiler.state object, if it exists, or a user-supplied state object.
node {Object}: Instance of snapdragon-node
type {String}: The node.type to check for.
returns {Boolean}
Example
var state = { inside: {}};
var node = new Node({type: 'brace'});
var open = new Node({type: 'brace.open'});
console.log(utils.isInside(state, open, 'brace')); //=> false
utils.pushNode(node, open);
console.log(utils.isInside(state, open, 'brace')); //=> true
Get the nth-from-last element from the given array. Used for getting a node from node.nodes.
Params
array {Array}
n {Number}
returns {undefined}
Cast the given val to an array.
Params
val {any}
returns {Array}
Example
console.log(utils.arraify(''));
//=> []
console.log(utils.arraify('foo'));
//=> ['foo']
console.log(utils.arraify(['foo']));
//=> ['foo']
Convert the given val to a string by joining with ,. Useful for creating a cheerio/CSS/DOM-style selector from a list of strings.
Params
val {any}
returns {Array}
Ensure that the given value is a string and call .trim() on it, or return an empty string.
Params
str {String}
returns {String}
Changelog entries are classified using the following labels from keep-a-changelog:
added: for new features
changed: for changes in existing functionality
deprecated: for once-stable features removed in upcoming releases
removed: for deprecated features removed in this release
fixed: for any bug fixes
Custom labels used in this changelog:
dependencies: bumps dependencies
housekeeping: code re-organization, minor edits, or other changes that don’t fit in one of the other categories.
Changed
.emit was renamed to .append
.addNode was renamed to .pushNode
.getNode was renamed to .findNode
.isEmptyNodes was renamed to .isEmpty: also now works with node.nodes and/or node.val
Added
First release.
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Please read the contributing guide for advice on opening issues, pull requests, and coding standards.
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on May 01, 2017.
# semver(1) – The semantic versioner for npm
As a node module:
const semver = require('semver')
semver.valid('1.2.3') // '1.2.3'
semver.valid('a.b.c') // null
semver.clean(' =v1.2.3 ') // '1.2.3'
semver.satisfies('1.2.3', '1.x || >=2.5.0 || 5.0.0 - 7.2.3') // true
semver.gt('1.2.3', '9.8.7') // false
semver.lt('1.2.3', '9.8.7') // true
semver.minVersion('>=1.0.0') // '1.0.0'
semver.valid(semver.coerce('v2')) // '2.0.0'
semver.valid(semver.coerce('42.6.7.9.3-alpha')) // '42.6.7'
You can also just load the module for the function that you care about, if you’d like to minimize your footprint.
// load the whole API at once in a single object
const semver = require('semver')
// or just load the bits you need
// all of them listed here, just pick and choose what you want
// classes
const SemVer = require('semver/classes/semver')
const Comparator = require('semver/classes/comparator')
const Range = require('semver/classes/range')
// functions for working with versions
const semverParse = require('semver/functions/parse')
const semverValid = require('semver/functions/valid')
const semverClean = require('semver/functions/clean')
const semverInc = require('semver/functions/inc')
const semverDiff = require('semver/functions/diff')
const semverMajor = require('semver/functions/major')
const semverMinor = require('semver/functions/minor')
const semverPatch = require('semver/functions/patch')
const semverPrerelease = require('semver/functions/prerelease')
const semverCompare = require('semver/functions/compare')
const semverRcompare = require('semver/functions/rcompare')
const semverCompareLoose = require('semver/functions/compare-loose')
const semverCompareBuild = require('semver/functions/compare-build')
const semverSort = require('semver/functions/sort')
const semverRsort = require('semver/functions/rsort')
// low-level comparators between versions
const semverGt = require('semver/functions/gt')
const semverLt = require('semver/functions/lt')
const semverEq = require('semver/functions/eq')
const semverNeq = require('semver/functions/neq')
const semverGte = require('semver/functions/gte')
const semverLte = require('semver/functions/lte')
const semverCmp = require('semver/functions/cmp')
const semverCoerce = require('semver/functions/coerce')
// working with ranges
const semverSatisfies = require('semver/functions/satisfies')
const semverMaxSatisfying = require('semver/ranges/max-satisfying')
const semverMinSatisfying = require('semver/ranges/min-satisfying')
const semverToComparators = require('semver/ranges/to-comparators')
const semverMinVersion = require('semver/ranges/min-version')
const semverValidRange = require('semver/ranges/valid')
const semverOutside = require('semver/ranges/outside')
const semverGtr = require('semver/ranges/gtr')
const semverLtr = require('semver/ranges/ltr')
const semverIntersects = require('semver/ranges/intersects')
const simplifyRange = require('semver/ranges/simplify')
const rangeSubset = require('semver/ranges/subset')
As a command-line utility:
$ semver -h
A JavaScript implementation of the https://semver.org/ specification
Usage: semver [options] <version> [<version> [...]]
Prints valid versions sorted by SemVer precedence
Options:
-r --range <range>
Print versions that match the specified range.
-i --increment [<level>]
Increment a version by the specified level. Level can
be one of: major, minor, patch, premajor, preminor,
prepatch, or prerelease. Default level is 'patch'.
Only one version may be specified.
--preid <identifier>
Identifier to be used to prefix premajor, preminor,
prepatch or prerelease version increments.
-l --loose
Interpret versions and ranges loosely
-p --include-prerelease
Always include prerelease versions in range matching
-c --coerce
Coerce a string into SemVer if possible
(does not imply --loose)
--rtl
Coerce version strings right to left
--ltr
Coerce version strings left to right (default)
Program exits successfully if any valid version satisfies
all supplied ranges, and prints all satisfying versions.
If no satisfying versions are found, then exits failure.
Versions are printed in ascending order, so supplying
multiple versions to the utility will just sort them.
A “version” is described by the v2.0.0 specification found at https://semver.org/.
A leading "=" or "v" character is stripped off and ignored.
A version range is a set of comparators which specify versions that satisfy the range.
A comparator is composed of an operator and a version. The set of primitive operators is:
< Less than
<= Less than or equal to
> Greater than
>= Greater than or equal to
= Equal. If no operator is specified, then equality is assumed, so this operator is optional, but MAY be included.
For example, the comparator >=1.2.7 would match the versions 1.2.7, 1.2.8, 2.5.3, and 1.3.9, but not the versions 1.2.6 or 1.1.0.
Comparators can be joined by whitespace to form a comparator set, which is satisfied by the intersection of all of the comparators it includes.
A range is composed of one or more comparator sets, joined by ||. A version matches a range if and only if every comparator in at least one of the ||-separated comparator sets is satisfied by the version.
For example, the range >=1.2.7 <1.3.0 would match the versions 1.2.7, 1.2.8, and 1.2.99, but not the versions 1.2.6, 1.3.0, or 1.1.0.
The range 1.2.7 || >=1.2.9 <2.0.0 would match the versions 1.2.7, 1.2.9, and 1.4.6, but not the versions 1.2.8 or 2.0.0.
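The intersection/union semantics above can be sketched with a toy evaluator. This is not the semver package: it only handles primitive comparators over plain x.y.z versions, with whitespace as AND within a set and `||` as OR between sets:

```javascript
// Compare two [major, minor, patch] tuples: -1, 0, or 1.
function cmp(a, b) {
  for (let i = 0; i < 3; i++) {
    if (a[i] !== b[i]) return a[i] < b[i] ? -1 : 1;
  }
  return 0;
}

const ops = {
  '>': r => r > 0, '>=': r => r >= 0,
  '<': r => r < 0, '<=': r => r <= 0, '=': r => r === 0,
};

function satisfies(version, range) {
  const v = version.split('.').map(Number);
  // a version matches if every comparator in at least one
  // ||-separated set is satisfied
  return range.split('||').some(set =>
    set.trim().split(/\s+/).every(comparator => {
      const [, op = '=', rest] = /^(>=|<=|>|<|=)?(.+)$/.exec(comparator);
      return ops[op](cmp(v, rest.split('.').map(Number)));
    })
  );
}

console.log(satisfies('1.2.8', '>=1.2.7 <1.3.0'));          // true
console.log(satisfies('1.2.8', '1.2.7 || >=1.2.9 <2.0.0')); // false
```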
If a version has a prerelease tag (for example, 1.2.3-alpha.3) then it will only be allowed to satisfy comparator sets if at least one comparator with the same [major, minor, patch] tuple also has a prerelease tag.
For example, the range >1.2.3-alpha.3 would be allowed to match the version 1.2.3-alpha.7, but it would not be satisfied by 3.4.5-alpha.9, even though 3.4.5-alpha.9 is technically “greater than” 1.2.3-alpha.3 according to the SemVer sort rules. The version range only accepts prerelease tags on the 1.2.3 version. The version 3.4.5 would satisfy the range, because it does not have a prerelease flag, and 3.4.5 is greater than 1.2.3-alpha.7.
The purpose for this behavior is twofold. First, prerelease versions frequently are updated very quickly, and contain many breaking changes that are (by the author’s design) not yet fit for public consumption. Therefore, by default, they are excluded from range matching semantics.
Second, a user who has opted into using a prerelease version has clearly indicated the intent to use that specific set of alpha/beta/rc versions. By including a prerelease tag in the range, the user is indicating that they are aware of the risk. However, it is still not appropriate to assume that they have opted into taking a similar risk on the next set of prerelease versions.
Note that this behavior can be suppressed (treating all prerelease versions as if they were normal versions, for the purpose of range matching) by setting the includePrerelease flag on the options object to any functions that do range matching.
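The precedence rules for comparing prerelease identifier sets (e.g. the part after the `-` in 1.2.3-alpha.3) can be sketched as follows; the helper name is hypothetical, not part of semver's API:

```javascript
// Identifiers are compared left to right: numeric identifiers compare
// numerically and sort below alphanumeric ones; when all preceding
// identifiers are equal, the shorter set has lower precedence.
function comparePre(a, b) {
  const as = a.split('.'), bs = b.split('.');
  for (let i = 0; i < Math.max(as.length, bs.length); i++) {
    const x = as[i], y = bs[i];
    if (x === undefined) return -1; // a ran out first: lower precedence
    if (y === undefined) return 1;
    const nx = /^\d+$/.test(x), ny = /^\d+$/.test(y);
    if (nx && ny) { if (+x !== +y) return +x < +y ? -1 : 1; }
    else if (nx !== ny) return nx ? -1 : 1; // numeric < alphanumeric
    else if (x !== y) return x < y ? -1 : 1;
  }
  return 0;
}

console.log(comparePre('alpha.3', 'alpha.7')); // -1 (alpha.3 is lower)
```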
The method .inc takes an additional identifier string argument that will append the value of the string as a prerelease identifier:
command-line example:
Which then can be used to increment further:
Advanced range syntax desugars to primitive comparators in deterministic ways.
Advanced ranges may be combined in the same way as primitive comparators using white space or ||.
X.Y.Z - A.B.C
Specifies an inclusive set.
1.2.3 - 2.3.4 := >=1.2.3 <=2.3.4
If a partial version is provided as the first version in the inclusive range, then the missing pieces are replaced with zeroes.
1.2 - 2.3.4 := >=1.2.0 <=2.3.4
If a partial version is provided as the second version in the inclusive range, then all versions that start with the supplied parts of the tuple are accepted, but nothing that would be greater than the provided tuple parts.
1.2.3 - 2.3 := >=1.2.3 <2.4.0-0
1.2.3 - 2 := >=1.2.3 <3.0.0-0
1.2.x 1.X 1.2.* *
Any of X, x, or * may be used to “stand in” for one of the numeric values in the [major, minor, patch] tuple.
* := >=0.0.0 (Any version satisfies)
1.x := >=1.0.0 <2.0.0-0 (Matching major version)
1.2.x := >=1.2.0 <1.3.0-0 (Matching major and minor versions)
A partial version range is treated as an X-Range, so the special character is in fact optional.
"" (empty string) := * := >=0.0.01 := 1.x.x := >=1.0.0 <2.0.0-01.2 := 1.2.x := >=1.2.0 <1.3.0-0~1.2.3 ~1.2 ~1Allows patch-level changes if a minor version is specified on the comparator. Allows minor-level changes if not.
~1.2.3 := >=1.2.3 <1.(2+1).0 := >=1.2.3 <1.3.0-0
~1.2 := >=1.2.0 <1.(2+1).0 := >=1.2.0 <1.3.0-0 (Same as 1.2.x)
~1 := >=1.0.0 <(1+1).0.0 := >=1.0.0 <2.0.0-0 (Same as 1.x)
~0.2.3 := >=0.2.3 <0.(2+1).0 := >=0.2.3 <0.3.0-0
~0.2 := >=0.2.0 <0.(2+1).0 := >=0.2.0 <0.3.0-0 (Same as 0.2.x)
~0 := >=0.0.0 <(0+1).0.0 := >=0.0.0 <1.0.0-0 (Same as 0.x)
~1.2.3-beta.2 := >=1.2.3-beta.2 <1.3.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^1.2.3 ^0.2.5 ^0.0.4
Allows changes that do not modify the left-most non-zero element in the [major, minor, patch] tuple. In other words, this allows patch and minor updates for versions 1.0.0 and above, patch updates for versions 0.X >=0.1.0, and no updates for versions 0.0.X.
Many authors treat a 0.x version as if the x were the major “breaking-change” indicator.
Caret ranges are ideal when an author may make breaking changes between 0.2.4 and 0.3.0 releases, which is a common practice. However, it presumes that there will not be breaking changes between 0.2.4 and 0.2.5. It allows for changes that are presumed to be additive (but non-breaking), according to commonly observed practices.
^1.2.3 := >=1.2.3 <2.0.0-0
^0.2.3 := >=0.2.3 <0.3.0-0
^0.0.3 := >=0.0.3 <0.0.4-0
^1.2.3-beta.2 := >=1.2.3-beta.2 <2.0.0-0 Note that prereleases in the 1.2.3 version will be allowed, if they are greater than or equal to beta.2. So, 1.2.3-beta.4 would be allowed, but 1.2.4-beta.2 would not, because it is a prerelease of a different [major, minor, patch] tuple.
^0.0.3-beta := >=0.0.3-beta <0.0.4-0 Note that prereleases in the 0.0.3 version only will be allowed, if they are greater than or equal to beta. So, 0.0.3-pr.2 would be allowed.
When parsing caret ranges, a missing patch value desugars to the number 0, but will allow flexibility within that value, even if the major and minor versions are both 0.
^1.2.x := >=1.2.0 <2.0.0-0
^0.0.x := >=0.0.0 <0.1.0-0
^0.0 := >=0.0.0 <0.1.0-0
Missing minor and patch values will desugar to zero, but also allow flexibility within those values, even if the major version is zero.
^1.x := >=1.0.0 <2.0.0-0
^0.x := >=0.0.0 <1.0.0-0
Putting all this together, here is a Backus-Naur grammar for ranges, for the benefit of parser authors:
range-set ::= range ( logical-or range ) *
logical-or ::= ( ' ' ) * '||' ( ' ' ) *
range ::= hyphen | simple ( ' ' simple ) * | ''
hyphen ::= partial ' - ' partial
simple ::= primitive | partial | tilde | caret
primitive ::= ( '<' | '>' | '>=' | '<=' | '=' ) partial
partial ::= xr ( '.' xr ( '.' xr qualifier ? )? )?
xr ::= 'x' | 'X' | '*' | nr
nr ::= '0' | ['1'-'9'] ( ['0'-'9'] ) *
tilde ::= '~' partial
caret ::= '^' partial
qualifier ::= ( '-' pre )? ( '+' build )?
pre ::= parts
build ::= parts
parts ::= part ( '.' part ) *
part ::= nr | [-0-9A-Za-z]+
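The tilde and caret desugaring rules above can be sketched for full x.y.z tuples; the helper below is a hypothetical illustration, not part of semver's API, and it ignores X-ranges and prerelease tags:

```javascript
// Desugar ~x.y.z and ^x.y.z (full tuples only) into primitive comparators.
function desugar(range) {
  const m = /^([~^])(\d+)\.(\d+)\.(\d+)$/.exec(range);
  if (!m) return range; // partial versions and prereleases not handled here
  const [, op, maj, min, pat] = m;
  const lower = `>=${maj}.${min}.${pat}`;
  if (op === '~') return `${lower} <${maj}.${+min + 1}.0-0`;
  // caret: bump the left-most non-zero element
  if (+maj > 0) return `${lower} <${+maj + 1}.0.0-0`;
  if (+min > 0) return `${lower} <0.${+min + 1}.0-0`;
  return `${lower} <0.${min}.${+pat + 1}-0`;
}

console.log(desugar('~1.2.3')); // '>=1.2.3 <1.3.0-0'
console.log(desugar('^0.2.3')); // '>=0.2.3 <0.3.0-0'
```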
All methods and classes take a final options object argument. All options in this object are false by default. The options supported are:
loose Be more forgiving about not-quite-valid semver strings. (Any resulting output will always be 100% strict compliant, of course.) For backwards compatibility reasons, if the options argument is a boolean value instead of an object, it is interpreted to be the loose param.
includePrerelease Set to suppress the default behavior of excluding prerelease tagged versions from ranges unless they are explicitly opted into.
Strict-mode Comparators and Ranges will be strict about the SemVer strings that they parse.
valid(v): Return the parsed version, or null if it’s not valid.
inc(v, release): Return the version incremented by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if it’s not valid.
premajor in one call will bump the version up to the next major version and down to a prerelease of that major version. preminor, and prepatch work the same way.prerelease will work the same as prepatch. It increments the patch version, then makes a prerelease. If the input version is already a prerelease it simply increments it.prerelease(v): Returns an array of prerelease components, or null if none exist. Example: prerelease('1.2.3-alpha.1') -> ['alpha', 1]major(v): Return the major version number.minor(v): Return the minor version number.patch(v): Return the patch version number.intersects(r1, r2, loose): Return true if the two supplied ranges or comparators intersect.parse(v): Attempt to parse a string as a semantic version, returning either a SemVer object or null.gt(v1, v2): v1 > v2gte(v1, v2): v1 >= v2lt(v1, v2): v1 < v2lte(v1, v2): v1 <= v2eq(v1, v2): v1 == v2 This is true if they’re logically equivalent, even if they’re not the exact same string. You already know how to compare strings.neq(v1, v2): v1 != v2 The opposite of eq.cmp(v1, comparator, v2): Pass in a comparison string, and it’ll call the corresponding function above. "===" and "!==" do simple string comparison, but are included for completeness. Throws if an invalid comparison string is provided.compare(v1, v2): Return 0 if v1 == v2, or 1 if v1 is greater, or -1 if v2 is greater. Sorts in ascending order if passed to Array.sort().rcompare(v1, v2): The reverse of compare. Sorts an array of versions in descending order when passed to Array.sort().compareBuild(v1, v2): The same as compare but considers build when two versions are equal. Sorts in ascending order if passed to Array.sort(). v2 is greater. 
Sorts in ascending order if passed to Array.sort().diff(v1, v2): Returns difference between two versions by the release type (major, premajor, minor, preminor, patch, prepatch, or prerelease), or null if the versions are the same.intersects(comparator): Return true if the comparators intersectvalidRange(range): Return the valid range or null if it’s not validsatisfies(version, range): Return true if the version satisfies the range.maxSatisfying(versions, range): Return the highest version in the list that satisfies the range, or null if none of them do.minSatisfying(versions, range): Return the lowest version in the list that satisfies the range, or null if none of them do.minVersion(range): Return the lowest version that can possibly match the given range.gtr(version, range): Return true if version is greater than all the versions possible in the range.ltr(version, range): Return true if version is less than all the versions possible in the range.outside(version, range, hilo): Return true if the version is outside the bounds of the range in either the high or low direction. The hilo argument must be either the string '>' or '<'. (This is the function called by gtr and ltr.)intersects(range): Return true if any of the ranges comparators intersectsimplifyRange(versions, range): Return a “simplified” range that matches the same items in versions list as the range specified. Note that it does not guarantee that it would match the same versions in all cases, only for the set of versions provided. This is useful when generating ranges by joining together multiple versions with || programmatically, to provide the user with something a bit more ergonomic. 
If the provided range is shorter in string-length than the generated range, then that is returned.subset(subRange, superRange): Return true if the subRange range is entirely contained by the superRange range.Note that, since ranges may be non-contiguous, a version might not be greater than a range, less than a range, or satisfy a range! For example, the range 1.2 <1.2.9 || >2.0.0 would have a hole from 1.2.9 until 2.0.0, so the version 1.2.10 would not be greater than the range (because 2.0.1 satisfies, which is higher), nor less than the range (since 1.2.8 satisfies, which is lower), and it also does not satisfy the range.
If you want to know if a version satisfies or does not satisfy a range, use the satisfies(version, range) function.
coerce(version, options): Coerces a string to semver if possible.
This aims to provide a very forgiving translation of a non-semver string to semver. It looks for the first digit in a string, and consumes all remaining characters which satisfy at least a partial semver (e.g., 1, 1.2, 1.2.3) up to the max permitted length (256 characters). Longer versions are simply truncated (4.6.3.9.2-alpha2 becomes 4.6.3). All surrounding text is simply ignored (v3.4 replaces v3.3.1 becomes 3.4.0). Only text which lacks digits will fail coercion (version one is not valid). The maximum length for any semver component considered for coercion is 16 characters; longer components will be ignored (10000000000000000.4.7.4 becomes 4.7.4). The maximum value for any semver component is Number.MAX_SAFE_INTEGER || (2**53 - 1); higher value components are invalid (9999999999999999.4.7.4 is likely invalid).
If the options.rtl flag is set, then coerce will return the right-most coercible tuple that does not share an ending index with a longer coercible tuple. For example, 1.2.3.4 will return 2.3.4 in rtl mode, not 4.0.0. 1.2.3/4 will return 4.0.0, because the 4 is not a part of any other overlapping SemVer tuple.
clean(version): Clean a string to be a valid semver if possible.
This will return a cleaned and trimmed semver version. If the provided version is not valid, null will be returned. This does not work for ranges.
ex.
* s.clean(' = v 2.1.5foo'): null
* s.clean(' = v 2.1.5foo', { loose: true }): '2.1.5-foo'
* s.clean(' = v 2.1.5-foo'): null
* s.clean(' = v 2.1.5-foo', { loose: true }): '2.1.5-foo'
* s.clean('=v2.1.5'): '2.1.5'
* s.clean(' =v2.1.5'): '2.1.5'
* s.clean(' 2.1.5 '): '2.1.5'
* s.clean('~1.0.0'): null
You may pull in just the part of this semver utility that you need, if you are sensitive to packing and tree-shaking concerns. The main require('semver') export uses getter functions to lazily load the parts of the API that are used.
The following modules are available:
require('semver')
require('semver/classes')
require('semver/classes/comparator')
require('semver/classes/range')
require('semver/classes/semver')
require('semver/functions/clean')
require('semver/functions/cmp')
require('semver/functions/coerce')
require('semver/functions/compare')
require('semver/functions/compare-build')
require('semver/functions/compare-loose')
require('semver/functions/diff')
require('semver/functions/eq')
require('semver/functions/gt')
require('semver/functions/gte')
require('semver/functions/inc')
require('semver/functions/lt')
require('semver/functions/lte')
require('semver/functions/major')
require('semver/functions/minor')
require('semver/functions/neq')
require('semver/functions/parse')
require('semver/functions/patch')
require('semver/functions/prerelease')
require('semver/functions/rcompare')
require('semver/functions/rsort')
require('semver/functions/satisfies')
require('semver/functions/sort')
require('semver/functions/valid')
require('semver/ranges/gtr')
require('semver/ranges/intersects')
require('semver/ranges/ltr')
require('semver/ranges/max-satisfying')
require('semver/ranges/min-satisfying')
require('semver/ranges/min-version')
require('semver/ranges/outside')
require('semver/ranges/to-comparators')
require('semver/ranges/valid')

This package provides methods for traversing the file system and returning pathnames that match a specified set of patterns according to the rules of the Unix Bash shell, with some simplifications. Results are returned in arbitrary order. Quick, simple, effective.
Details
This package works in two modes, depending on the environment in which it is used.
The old mode is used when the stats option is enabled; the modern mode is used when the stats option is disabled. The modern mode is faster. Learn more about the internal mechanism.
:warning: Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters.
There is more than one form of syntax: basic and advanced. Below is a brief overview of the supported features. Also pay attention to our FAQ.
:book: This package uses micromatch as a library for pattern matching.
* — matches everything except slashes (path separators) and hidden files (names starting with .).
** — matches zero or more directories.
? — matches any single character except slashes (path separators).
[seq] — matches any character in sequence.
:book: A few additional words about the basic matching behavior.
Some examples:
src/**/*.js — matches all files in the src directory (any level of nesting) that have the .js extension.
src/*.?? — matches all files in the src directory (only first level of nesting) that have a two-character extension.
file-[01].js — matches files: file-0.js, file-1.js.
Escapes (\\) — matching special characters ($^*+?()[]) as literals.
POSIX character classes ([[:digit:]]).
Extended globs (?(pattern-list)).
Brace expansion ({}).
Regex character classes ([1-5]).
Regex groups ((a|b)).
:book: A few additional words about the advanced matching behavior.
Some examples:
src/**/*.{css,scss} — matches all files in the src directory (any level of nesting) that have the .css or .scss extension.
file-[[:digit:]].js — matches files: file-0.js, file-1.js, …, file-9.js.
file-{1..3}.js — matches files: file-1.js, file-2.js, file-3.js.
file-(1|2) — matches files: file-1.js, file-2.js.

npm install fast-glob
Returns a Promise with an array of matching entries.
const fg = require('fast-glob');
const entries = await fg(['.editorconfig', '**/index.js'], { dot: true });
// ['.editorconfig', 'services/index.js']

Returns an array of matching entries.
const fg = require('fast-glob');
const entries = fg.sync(['.editorconfig', '**/index.js'], { dot: true });
// ['.editorconfig', 'services/index.js']

Returns a ReadableStream that emits a data event for each matching entry.
const fg = require('fast-glob');
const stream = fg.stream(['.editorconfig', '**/index.js'], { dot: true });
for await (const entry of stream) {
// .editorconfig
// services/index.js
}

patterns: Required: true. Type: string | string[]. Any correct pattern(s).
:1234: Pattern syntax
:warning: This package does not respect the order of patterns. First, all the negative patterns are applied, and only then the positive patterns. If you want to get a certain order of records, use sorting or split calls.
options: Required: false. Type: Options. See Options section.
generateTasks(patterns, [options]): Returns the internal representation of patterns (a Task groups patterns by base directory).
fg.generateTasks('*');
[{
base: '.', // Parent directory for all patterns inside this task
dynamic: true, // Dynamic or static patterns are in this task
patterns: ['*'],
positive: ['*'],
negative: []
}]

patterns: Required: true. Type: string | string[]. Any correct pattern(s).
options: Required: false. Type: Options. See Options section.
isDynamicPattern(pattern, [options]): Returns true if the passed pattern is a dynamic pattern.
pattern: Required: true. Type: string. Any correct pattern.
options: Required: false. Type: Options. See Options section.
escapePath(pattern): Returns a path with escaped special characters (*?|(){}[], ! at the beginning of line, @+! before the opening parenthesis).
fg.escapePath('!abc'); // \\!abc
fg.escapePath('C:/Program Files (x86)'); // C:/Program Files \\(x86\\)

pattern: Required: true. Type: string. Any string, for example, a path to a file.
concurrency: Type: number. Default: os.cpus().length. Specifies the maximum number of concurrent requests from a reader to read directories.
:book: The higher the number, the higher the performance and the load on the file system. If you want to read in quiet mode, set the value to a comfortable number or 1.
cwd: Type: string. Default: process.cwd(). The current working directory in which to search.
deep: Type: number. Default: Infinity. Specifies the maximum depth of a read directory relative to the start directory.
For example, you have the following tree:
// With base directory
fg.sync('dir/**', { onlyFiles: false, deep: 1 }); // ['dir/one']
fg.sync('dir/**', { onlyFiles: false, deep: 2 }); // ['dir/one', 'dir/one/two']
// With cwd option
fg.sync('**', { onlyFiles: false, cwd: 'dir', deep: 1 }); // ['one']
fg.sync('**', { onlyFiles: false, cwd: 'dir', deep: 2 }); // ['one', 'one/two']

:book: If you specify a pattern with some base directory, this directory will not participate in the calculation of the depth of the found directories. Think of it as a cwd option.
followSymbolicLinks: Type: boolean. Default: true. Indicates whether to traverse descendants of symbolic link directories.
:book: If the stats option is specified, the information about the symbolic link (fs.lstat) will be replaced with information about the entry (fs.stat) behind it.
fs: Type: FileSystemAdapter. Default: fs.*. Custom implementation of methods for working with the file system.
export interface FileSystemAdapter {
lstat?: typeof fs.lstat;
stat?: typeof fs.stat;
lstatSync?: typeof fs.lstatSync;
statSync?: typeof fs.statSync;
readdir?: typeof fs.readdir;
readdirSync?: typeof fs.readdirSync;
}

ignore: Type: string[]. Default: []. An array of glob patterns to exclude matches. This is an alternative way to use negative patterns.
fg.sync(['*.json', '!package-lock.json']); // ['package.json']
fg.sync('*.json', { ignore: ['package-lock.json'] }); // ['package.json']

suppressErrors: Type: boolean. Default: false. By default this package suppresses only ENOENT errors. Set to true to suppress any error.
:book: Can be useful when the directory has entries with a special level of access.
throwErrorOnBrokenSymbolicLink: Type: boolean. Default: false. Throw an error when a symbolic link is broken if true, or safely return the lstat call result if false.
:book: This option has no effect on errors when reading the symbolic link directory.
absolute: Type: boolean. Default: false. Return the absolute path for entries.
fg.sync('*.js', { absolute: false }); // ['index.js']
fg.sync('*.js', { absolute: true }); // ['/home/user/index.js']

:book: This option is required if you want to use negative patterns with an absolute path, for example, !${__dirname}/*.js.
markDirectories: Type: boolean. Default: false. Mark the directory path with the final slash.
fg.sync('*', { onlyFiles: false, markDirectories: false }); // ['index.js', 'controllers']
fg.sync('*', { onlyFiles: false, markDirectories: true }); // ['index.js', 'controllers/']

objectMode: Type: boolean. Default: false. Returns objects (instead of strings) describing entries.
fg.sync('*', { objectMode: false }); // ['src/index.js']
fg.sync('*', { objectMode: true }); // [{ name: 'index.js', path: 'src/index.js', dirent: <fs.Dirent> }]

The object has the following fields:
name (string) — the last part of the path (basename)
path (string) — full path relative to the pattern base directory
dirent (fs.Dirent) — instance of fs.Dirent
:book: An object is an internal representation of an entry, so getting it does not affect performance.
onlyDirectories: Type: boolean. Default: false. Return only directories.
fg.sync('*', { onlyDirectories: false }); // ['index.js', 'src']
fg.sync('*', { onlyDirectories: true }); // ['src']

:book: If true, the onlyFiles option is automatically false.
onlyFiles: Type: boolean. Default: true. Return only files.
fg.sync('*', { onlyFiles: false }); // ['index.js', 'src']
fg.sync('*', { onlyFiles: true }); // ['index.js']

stats: Type: boolean. Default: false. Enables an object mode with an additional field:
stats (fs.Stats) — instance of fs.Stats
fg.sync('*', { stats: false }); // ['src/index.js']
fg.sync('*', { stats: true }); // [{ name: 'index.js', path: 'src/index.js', dirent: <fs.Dirent>, stats: <fs.Stats> }]

:book: Returns fs.stat instead of fs.lstat for symbolic links when the followSymbolicLinks option is specified.
:warning: Unlike object mode, this mode requires additional calls to the file system. On average, this mode is at least twice as slow. See old and modern mode for more details.
unique: Type: boolean. Default: true. Ensures that the returned entries are unique.
fg.sync(['*.json', 'package.json'], { unique: false }); // ['package.json', 'package.json']
fg.sync(['*.json', 'package.json'], { unique: true }); // ['package.json']

If true and similar entries are found, the result is the first found entry.
braceExpansion: Type: boolean. Default: true. Enables Bash-like brace expansion.
:1234: Syntax description or more detailed description.
fg.sync('a{b,c}d', { braceExpansion: false }); // ['a{b,c}d']
fg.sync('a{b,c}d', { braceExpansion: true }); // ['abd', 'acd']

caseSensitiveMatch: Type: boolean. Default: true. Enables a case-sensitive mode for matching files.
fg.sync('file.txt', { caseSensitiveMatch: false }); // ['file.txt', 'File.txt']
fg.sync('file.txt', { caseSensitiveMatch: true }); // ['file.txt']

dot: Type: boolean. Default: false. Allow patterns to match entries that begin with a period (.).
:book: Note that an explicit dot in a portion of the pattern will always match dot files.
fg.sync('*', { dot: false }); // ['package.json']
fg.sync('*', { dot: true }); // ['.editorconfig', 'package.json']

extglob: Type: boolean. Default: true. Enables Bash-like extglob functionality.
:1234: Syntax description.
fg.sync('*.+(json|md)', { extglob: false }); // []
fg.sync('*.+(json|md)', { extglob: true }); // ['README.md', 'package.json']

globstar: Type: boolean. Default: true. Enables recursive matching for patterns that contain **. If false, ** behaves exactly like *.
fg.sync('**', { onlyFiles: false, globstar: false }); // ['a']
fg.sync('**', { onlyFiles: false, globstar: true }); // ['a', 'a/b']

baseNameMatch: Type: boolean. Default: false. If set to true, then patterns without slashes will be matched against the basename of the path if it contains slashes.
fg.sync('*.md', { baseNameMatch: false }); // []
fg.sync('*.md', { baseNameMatch: true }); // ['one/file.md']

All patterns can be divided into two types:
The file.js pattern is a static pattern, because we can just verify that it exists on the file system.
The * pattern is a dynamic pattern, because we cannot use this pattern directly.
A pattern is considered dynamic if it contains the following characters (… — any characters or their absence) or options:
the caseSensitiveMatch option is disabled
\\ (the escape character)
*, ?, ! (at the beginning of line)
[…]
(…|…)
@(…), !(…), *(…), ?(…), +(…) (respects the extglob option)
{…,…}, {…..…} (respects the braceExpansion option)

Always use forward-slashes in glob expressions (patterns and ignore option). Use backslashes for escaping characters. With the cwd option use a convenient format.
Bad
Good
:book: Use the normalize-path or the unixify package to convert a Windows-style path to a Unix-style path.
Read more about matching with backslashes.
Refers to Bash. You need to escape special characters:
Read more about matching special characters as literals.
You can use a negative pattern like this: !**/node_modules or !**/node_modules/**. You can also use the ignore option. Just look at the example below.
If you don’t want to read the second directory, you must write the following pattern: !**/second or !**/second/**.
fg.sync(['**/*.md', '!**/second']); // ['first/file.md']
fg.sync(['**/*.md'], { ignore: ['**/second/**'] }); // ['first/file.md']

:warning: When you write !**/second/**/*, it means that the directory will be read, but none of its entries will be included in the results. You have to understand that if you write a pattern to exclude directories, then the directory will not be read under any circumstances.
You cannot use Uniform Naming Convention (UNC) paths as patterns (due to syntax), but you can use them as cwd directory.
fg.sync('*', { cwd: '\\\\?\\C:\\Python27' /* or //?/C:/Python27 */ });
fg.sync('Python27/*', { cwd: '\\\\?\\C:\\' /* or //?/C:/ */ });

Compatibility with node-glob?

| node-glob | fast-glob |
|---|---|
| cwd | cwd |
| root | – |
| dot | dot |
| nomount | – |
| mark | markDirectories |
| nosort | – |
| nounique | unique |
| nobrace | braceExpansion |
| noglobstar | globstar |
| noext | extglob |
| nocase | caseSensitiveMatch |
| matchBase | baseNameMatch |
| nodir | onlyFiles |
| ignore | ignore |
| follow | followSymbolicLinks |
| realpath | – |
| absolute | absolute |
Link: Vultr Bare Metal
You can see the results here for the latest release.
Link: Zotac bi323
You can see the results here for the latest release.
See the Releases section of our GitHub project for the changelog of each release version.
This is a library to generate and consume the source map format described here.
npm install source-map
var rawSourceMap = {
version: 3,
file: 'min.js',
names: ['bar', 'baz', 'n'],
sources: ['one.js', 'two.js'],
sourceRoot: 'http://example.com/www/js/',
mappings: 'CAAC,IAAI,IAAM,SAAUA,GAClB,OAAOC,IAAID;CCDb,IAAI,IAAM,SAAUE,GAClB,OAAOA'
};
var smc = new SourceMapConsumer(rawSourceMap);
console.log(smc.sources);
// [ 'http://example.com/www/js/one.js',
// 'http://example.com/www/js/two.js' ]
console.log(smc.originalPositionFor({
line: 2,
column: 28
}));
// { source: 'http://example.com/www/js/two.js',
// line: 2,
// column: 10,
// name: 'n' }
console.log(smc.generatedPositionFor({
source: 'http://example.com/www/js/two.js',
line: 2,
column: 10
}));
// { line: 2, column: 28 }
smc.eachMapping(function (m) {
// ...
});

In depth guide: Compiling to JavaScript, and Debugging with Source Maps
function compile(ast) {
switch (ast.type) {
case 'BinaryExpression':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
[compile(ast.left), " + ", compile(ast.right)]
);
case 'Literal':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
String(ast.value)
);
// ...
default:
throw new Error("Bad AST");
}
}
var ast = parse("40 + 2", "add.js");
console.log(compile(ast).toStringWithSourceMap({
file: 'add.js'
}));
// { code: '40 + 2',
// map: [object SourceMapGenerator] }

var map = new SourceMapGenerator({
file: "source-mapped.js"
});
map.addMapping({
generated: {
line: 10,
column: 35
},
source: "foo.js",
original: {
line: 33,
column: 2
},
name: "christopher"
});
console.log(map.toString());
// '{"version":3,"file":"source-mapped.js","sources":["foo.js"],"names":["christopher"],"mappings":";;;;;;;;;mCAgCEA"}'

Get a reference to the module:
// Node.js
var sourceMap = require('source-map');
// Browser builds
var sourceMap = window.sourceMap;
// Inside Firefox
const sourceMap = require("devtools/toolkit/sourcemap/source-map.js");

A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.
The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:
version: Which version of the source map spec this map is following.
sources: An array of URLs to the original source files.
names: An array of identifiers which can be referenced by individual mappings.
sourceRoot: Optional. The URL root from which all sources are relative.
sourcesContent: Optional. An array of contents of the original source files.
mappings: A string of base64 VLQs which contain the actual mappings.
file: Optional. The generated filename this source map is associated with.
Compute the last column for each generated mapping. The last column is inclusive.
// Before:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]
consumer.computeColumnSpans();
// After:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1,
// lastColumn: 9 },
// { line: 2,
// column: 10,
// lastColumn: 19 },
// { line: 2,
// column: 20,
// lastColumn: Infinity } ]

Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:
line: The line number in the generated source.
column: The column number in the generated source.
bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.
and an object is returned with the following properties:
source: The original source file, or null if this information is not available.
line: The line number in the original source, or null if this information is not available.
column: The column number in the original source, or null if this information is not available.
name: The original identifier, or null if this information is not available.
consumer.originalPositionFor({ line: 2, column: 10 })
// { source: 'foo.coffee',
// line: 2,
// column: 2,
// name: null }
consumer.originalPositionFor({ line: 99999999999999999, column: 999999999999999 })
// { source: null,
// line: null,
// column: null,
// name: null }

Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source.
column: The column number in the original source.
and an object is returned with the following properties:
line: The line number in the generated source, or null.
column: The column number in the generated source, or null.
consumer.generatedPositionFor({ source: "example.js", line: 2, column: 10 })
// { line: 1,
// column: 56 }

Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.
The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source.
column: Optional. The column number in the original source.
and an array of objects is returned, each with the following properties:
line: The line number in the generated source, or null.
column: The column number in the generated source, or null.
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]

Return true if we have the embedded source content for every source listed in the source map, false otherwise.
In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.
// ...
if (consumer.hasContentsOfAllSources()) {
consumerReadyCallback(consumer);
} else {
fetchSources(consumer, consumerReadyCallback);
}
// ...

Returns the original source content for the source provided. The only argument is the URL of the original source file.
If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.
consumer.sources
// [ "my-cool-lib.clj" ]
consumer.sourceContentFor("my-cool-lib.clj")
// "..."
consumer.sourceContentFor("this is not in the source map");
// Error: "this is not in the source map" is not in the source map
consumer.sourceContentFor("this is not in the source map", true);
// null

Iterate over each mapping between an original source/line/column and a generated line/column in this source map.
callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }
context: Optional. If specified, this object will be the value of this every time that callback is called.
order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.
consumer.eachMapping(function (m) { console.log(m); })
// ...
// { source: 'illmatic.js',
// generatedLine: 1,
// generatedColumn: 0,
// originalLine: 1,
// originalColumn: 0,
// name: null }
// { source: 'illmatic.js',
// generatedLine: 2,
// generatedColumn: 0,
// originalLine: 2,
// originalColumn: 0,
// name: null }
// ...

An instance of the SourceMapGenerator represents a source map which is being built incrementally.
You may pass an object with the following properties:
file: The filename of the generated source that this source map is associated with.
sourceRoot: A root for all relative URLs in this source map.
skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.
var generator = new sourceMap.SourceMapGenerator({
file: "my-generated-javascript-file.js",
sourceRoot: "http://example.com/app/js/"
});

Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.
sourceMapConsumer: The SourceMap.

Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:
generated: An object with the generated line and column positions.
original: An object with the original line and column positions.
source: The original source file (relative to the sourceRoot).
name: An optional original token name for this mapping.
generator.addMapping({
source: "module-one.scm",
original: { line: 128, column: 0 },
generated: { line: 3, column: 456 }
})

Set the source content for an original source file.
sourceFile: the URL of the original source file.
sourceContent: the content of the source file.
Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.
sourceMapConsumer: The SourceMap to be applied.
sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.
sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.
This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.
If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)
Renders the source map being generated to a string.
generator.toString()
// '{"version":3,"sources":["module-one.scm"],"names":[],"mappings":"...snip...","file":"my-generated-javascript-file.js","sourceRoot":"http://example.com/app/js/"}'

SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.
line: The original line number associated with this source node, or null if it isn’t associated with an original line.
column: The original column number associated with this source node, or null if it isn’t associated with an original column.
source: The original source’s filename; null if no filename is provided.
chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.
name: Optional. The original identifier.
var node = new SourceNode(1, 2, "a.cpp", [
new SourceNode(3, 4, "b.cpp", "extern int status;\n"),
new SourceNode(5, 6, "c.cpp", "std::string* make_string(size_t n);\n"),
new SourceNode(7, 8, "d.cpp", "int main(int argc, char** argv) {}\n"),
]);

Creates a SourceNode from generated code and a SourceMapConsumer.
code: The generated code
sourceMapConsumer The SourceMap for the generated code
relativePath The optional path that relative sources in sourceMapConsumer should be relative to.
var consumer = new SourceMapConsumer(fs.readFileSync("path/to/my-file.js.map", "utf8"));
var node = SourceNode.fromStringWithSourceMap(fs.readFileSync("path/to/my-file.js"),
consumer);

Add a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

Prepend a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.

Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.
sourceFile: The filename of the source file
sourceContent: The content of the source file
Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and the its original associated source’s line/column location.
fn: The traversal function.var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.walk(function (code, loc) { console.log("WALK:", code, loc); })
// WALK: uno { source: 'b.js', line: 3, column: 4, name: null }
// WALK: dos { source: 'a.js', line: 1, column: 2, name: null }
// WALK: tres { source: 'a.js', line: 1, column: 2, name: null }
// WALK: quatro { source: 'c.js', line: 5, column: 6, name: null }Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.
fn: The traversal function.var a = new SourceNode(1, 2, "a.js", "generated from a");
a.setSourceContent("a.js", "original a");
var b = new SourceNode(1, 2, "b.js", "generated from b");
b.setSourceContent("b.js", "original b");
var c = new SourceNode(1, 2, "c.js", "generated from c");
c.setSourceContent("c.js", "original c");
var node = new SourceNode(null, null, null, [a, b, c]);
node.walkSourceContents(function (source, contents) { console.log("WALK:", source, ":", contents); })
// WALK: a.js : original a
// WALK: b.js : original b
// WALK: c.js : original cLike Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.
sep: The separator.var lhs = new SourceNode(1, 2, "a.rs", "my_copy");
var operand = new SourceNode(3, 4, "a.rs", "=");
var rhs = new SourceNode(5, 6, "a.rs", "orig.clone()");
var node = new SourceNode(null, null, null, [ lhs, operand, rhs ]);
var joinedNode = node.join(" ");Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.
pattern: The pattern to replace.
replacement: The thing to replace the pattern with.
Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toString()
// 'unodostresquatro'Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.
The arguments are the same as those to new SourceMapGenerator.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toStringWithSourceMap({ file: "my-output-file.js" })
// { code: 'unodostresquatro',
// map: [object SourceMapGenerator] }This is a library to generate and consume the source map format described here.
npm install source-map
var rawSourceMap = {
version: 3,
file: 'min.js',
names: ['bar', 'baz', 'n'],
sources: ['one.js', 'two.js'],
sourceRoot: 'http://example.com/www/js/',
mappings: 'CAAC,IAAI,IAAM,SAAUA,GAClB,OAAOC,IAAID;CCDb,IAAI,IAAM,SAAUE,GAClB,OAAOA'
};
var smc = new SourceMapConsumer(rawSourceMap);
console.log(smc.sources);
// [ 'http://example.com/www/js/one.js',
// 'http://example.com/www/js/two.js' ]
console.log(smc.originalPositionFor({
line: 2,
column: 28
}));
// { source: 'http://example.com/www/js/two.js',
// line: 2,
// column: 10,
// name: 'n' }
console.log(smc.generatedPositionFor({
source: 'http://example.com/www/js/two.js',
line: 2,
column: 10
}));
// { line: 2, column: 28 }
smc.eachMapping(function (m) {
// ...
});
In depth guide: Compiling to JavaScript, and Debugging with Source Maps
function compile(ast) {
switch (ast.type) {
case 'BinaryExpression':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
[compile(ast.left), " + ", compile(ast.right)]
);
case 'Literal':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
String(ast.value)
);
// ...
default:
throw new Error("Bad AST");
}
}
var ast = parse("40 + 2", "add.js");
console.log(compile(ast).toStringWithSourceMap({
file: 'add.js'
}));
// { code: '40 + 2',
// map: [object SourceMapGenerator] }
var map = new SourceMapGenerator({
file: "source-mapped.js"
});
map.addMapping({
generated: {
line: 10,
column: 35
},
source: "foo.js",
original: {
line: 33,
column: 2
},
name: "christopher"
});
console.log(map.toString());
// '{"version":3,"file":"source-mapped.js","sources":["foo.js"],"names":["christopher"],"mappings":";;;;;;;;;mCAgCEA"}'
Get a reference to the module:
// Node.js
var sourceMap = require('source-map');
// Browser builds
var sourceMap = window.sourceMap;
// Inside Firefox
const sourceMap = require("devtools/toolkit/sourcemap/source-map.js");
A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.
The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:
version: Which version of the source map spec this map is following.
sources: An array of URLs to the original source files.
names: An array of identifiers which can be referenced by individual mappings.
sourceRoot: Optional. The URL root from which all sources are relative.
sourcesContent: Optional. An array of contents of the original source files.
mappings: A string of base64 VLQs which contain the actual mappings.
file: Optional. The generated filename this source map is associated with.
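To make the mappings field less opaque, here is a minimal decoder for a single base64 VLQ segment, such as the mCAgCEA produced by the addMapping example above. The encoding follows the spec; the helper function itself is invented for illustration and is not part of this library's API.

```javascript
// Minimal base64 VLQ segment decoder, an illustrative sketch only.
// Each base64 character carries 5 data bits; bit 5 (value 32) marks a
// continuation, and the low bit of each assembled value is its sign bit.
var BASE64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeSegment(segment) {
  var values = [];
  var value = 0;
  var shift = 0;
  for (var i = 0; i < segment.length; i++) {
    var digit = BASE64.indexOf(segment.charAt(i));
    value += (digit & 31) << shift;     // low 5 bits carry data
    if (digit & 32) {
      shift += 5;                       // more digits follow for this value
    } else {
      values.push(value & 1 ? -(value >>> 1) : value >>> 1);
      value = 0;
      shift = 0;
    }
  }
  return values;
}

// 'mCAgCEA' decodes to the deltas
// [generatedColumn, sourceIndex, originalLine, originalColumn, nameIndex]:
console.log(decodeSegment('mCAgCEA')); // → [ 35, 0, 32, 2, 0 ]
```

Note how the decoded deltas line up with the addMapping example: generated column 35, source index 0, original line 33 (zero-based 32), original column 2, name index 0; the run of leading semicolons in the mappings string accounts for generated line 10.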
Compute the last column for each generated mapping. The last column is inclusive.
// Before:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]
consumer.computeColumnSpans();
// After:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1,
// lastColumn: 9 },
// { line: 2,
// column: 10,
// lastColumn: 19 },
// { line: 2,
// column: 20,
// lastColumn: Infinity } ]
Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:
line: The line number in the generated source.
column: The column number in the generated source.
bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.
and an object is returned with the following properties:
source: The original source file, or null if this information is not available.
line: The line number in the original source, or null if this information is not available.
column: The column number in the original source, or null if this information is not available.
name: The original identifier, or null if this information is not available.
consumer.originalPositionFor({ line: 2, column: 10 })
// { source: 'foo.coffee',
// line: 2,
// column: 2,
// name: null }
consumer.originalPositionFor({ line: 99999999999999999, column: 999999999999999 })
// { source: null,
// line: null,
// column: null,
// name: null }
Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source.
column: The column number in the original source.
and an object is returned with the following properties:
line: The line number in the generated source, or null.
column: The column number in the generated source, or null.
consumer.generatedPositionFor({ source: "example.js", line: 2, column: 10 })
// { line: 1,
// column: 56 }
Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.
The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source.
column: Optional. The column number in the original source.
and an array of objects is returned, each with the following properties:
line: The line number in the generated source, or null.
column: The column number in the generated source, or null.
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]
Return true if we have the embedded source content for every source listed in the source map, false otherwise.
In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.
// ...
if (consumer.hasContentsOfAllSources()) {
consumerReadyCallback(consumer);
} else {
fetchSources(consumer, consumerReadyCallback);
}
// ...
Returns the original source content for the source provided. The only argument is the URL of the original source file.
If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.
consumer.sources
// [ "my-cool-lib.clj" ]
consumer.sourceContentFor("my-cool-lib.clj")
// "..."
consumer.sourceContentFor("this is not in the source map");
// Error: "this is not in the source map" is not in the source map
consumer.sourceContentFor("this is not in the source map", true);
// null
Iterate over each mapping between an original source/line/column and a generated line/column in this source map.
callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }
context: Optional. If specified, this object will be the value of this every time that callback is called.
order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.
consumer.eachMapping(function (m) { console.log(m); })
// ...
// { source: 'illmatic.js',
// generatedLine: 1,
// generatedColumn: 0,
// originalLine: 1,
// originalColumn: 0,
// name: null }
// { source: 'illmatic.js',
// generatedLine: 2,
// generatedColumn: 0,
// originalLine: 2,
// originalColumn: 0,
// name: null }
// ...
An instance of the SourceMapGenerator represents a source map which is being built incrementally.
You may pass an object with the following properties:
file: The filename of the generated source that this source map is associated with.
sourceRoot: A root for all relative URLs in this source map.
skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.
var generator = new sourceMap.SourceMapGenerator({
file: "my-generated-javascript-file.js",
sourceRoot: "http://example.com/app/js/"
});
Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.
sourceMapConsumer: The SourceMap.
Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:
generated: An object with the generated line and column positions.
original: An object with the original line and column positions.
source: The original source file (relative to the sourceRoot).
name: An optional original token name for this mapping.
generator.addMapping({
source: "module-one.scm",
original: { line: 128, column: 0 },
generated: { line: 3, column: 456 }
})
Set the source content for an original source file.
sourceFile the URL of the original source file.
sourceContent the content of the source file.
Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.
sourceMapConsumer: The SourceMap to be applied.
sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.
sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.
This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.
If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)
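As a rough illustration of what applySourceMap computes, the sketch below composes two mapping lists represented as plain objects: one from a minified file to a bundle, and one from the bundle back to an original source. The file names and positions are invented for the example, and this is not the library's implementation (which also handles path rewriting via sourceMapPath):

```javascript
// Sketch: compose (generated -> bundle.js) mappings with
// (bundle.js -> a.js) mappings, as applySourceMap does conceptually.
function applySketch(outerMappings, innerMappings, sourceFile) {
  return outerMappings.map(function (m) {
    if (m.source !== sourceFile) return m; // only rewrite mappings into sourceFile
    var match = innerMappings.find(function (n) {
      return n.generatedLine === m.originalLine &&
             n.generatedColumn === m.originalColumn;
    });
    if (!match) return m;
    // Keep the outer generated position; take the inner original position.
    return {
      generatedLine: m.generatedLine,
      generatedColumn: m.generatedColumn,
      source: match.source,
      originalLine: match.originalLine,
      originalColumn: match.originalColumn
    };
  });
}

var outer = [ // min.js -> bundle.js (invented data)
  { generatedLine: 1, generatedColumn: 0, source: 'bundle.js', originalLine: 3, originalColumn: 4 }
];
var inner = [ // bundle.js -> a.js (invented data)
  { generatedLine: 3, generatedColumn: 4, source: 'a.js', originalLine: 10, originalColumn: 2 }
];
console.log(applySketch(outer, inner, 'bundle.js'));
// → [ { generatedLine: 1, generatedColumn: 0, source: 'a.js', originalLine: 10, originalColumn: 2 } ]
```

This also shows why the resolution of the result is the minimum of the two maps: an outer mapping survives only if the inner map has a mapping at the matching position.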
Renders the source map being generated to a string.
generator.toString()
// '{"version":3,"sources":["module-one.scm"],"names":[],"mappings":"...snip...","file":"my-generated-javascript-file.js","sourceRoot":"http://example.com/app/js/"}'
SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.
line: The original line number associated with this source node, or null if it isn’t associated with an original line.
column: The original column number associated with this source node, or null if it isn’t associated with an original column.
source: The original source’s filename; null if no filename is provided.
chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.
name: Optional. The original identifier.
var node = new SourceNode(1, 2, "a.cpp", [
new SourceNode(3, 4, "b.cpp", "extern int status;\n"),
new SourceNode(5, 6, "c.cpp", "std::string* make_string(size_t n);\n"),
new SourceNode(7, 8, "d.cpp", "int main(int argc, char** argv) {}\n"),
]);
Creates a SourceNode from generated code and a SourceMapConsumer.
code: The generated code
sourceMapConsumer The SourceMap for the generated code
relativePath The optional path that relative sources in sourceMapConsumer should be relative to.
var consumer = new SourceMapConsumer(fs.readFileSync("path/to/my-file.js.map", "utf8"));
var node = SourceNode.fromStringWithSourceMap(fs.readFileSync("path/to/my-file.js"),
consumer);
Add a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.
Prepend a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.
Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.
sourceFile: The filename of the source file
sourceContent: The content of the source file
Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and its original associated source’s line/column location.
fn: The traversal function.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.walk(function (code, loc) { console.log("WALK:", code, loc); })
// WALK: uno { source: 'b.js', line: 3, column: 4, name: null }
// WALK: dos { source: 'a.js', line: 1, column: 2, name: null }
// WALK: tres { source: 'a.js', line: 1, column: 2, name: null }
// WALK: quatro { source: 'c.js', line: 5, column: 6, name: null }
Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.
fn: The traversal function.
var a = new SourceNode(1, 2, "a.js", "generated from a");
a.setSourceContent("a.js", "original a");
var b = new SourceNode(1, 2, "b.js", "generated from b");
b.setSourceContent("b.js", "original b");
var c = new SourceNode(1, 2, "c.js", "generated from c");
c.setSourceContent("c.js", "original c");
var node = new SourceNode(null, null, null, [a, b, c]);
node.walkSourceContents(function (source, contents) { console.log("WALK:", source, ":", contents); })
// WALK: a.js : original a
// WALK: b.js : original b
// WALK: c.js : original c
Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.
sep: The separator.
var lhs = new SourceNode(1, 2, "a.rs", "my_copy");
var operand = new SourceNode(3, 4, "a.rs", "=");
var rhs = new SourceNode(5, 6, "a.rs", "orig.clone()");
var node = new SourceNode(null, null, null, [ lhs, operand, rhs ]);
var joinedNode = node.join(" ");
Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.
pattern: The pattern to replace.
replacement: The thing to replace the pattern with.
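The "right-most snippet" behavior can be sketched on plain nested arrays of strings, standing in for a SourceNode's children. This is an analogy for illustration, not the library's implementation:

```javascript
// Sketch: apply String.prototype.replace to the right-most leaf of a
// nested array of strings, mimicking the replaceRight idea.
function replaceRightSketch(chunks, pattern, replacement) {
  var lastIndex = chunks.length - 1;
  var last = chunks[lastIndex];
  if (Array.isArray(last)) {
    replaceRightSketch(last, pattern, replacement); // recurse into children
  } else {
    chunks[lastIndex] = last.replace(pattern, replacement);
  }
  return chunks;
}

// Trim trailing whitespace from the very last snippet only:
console.log(replaceRightSketch(['foo ', ['bar ', 'baz \n']], /\s+$/, ''));
// → [ 'foo ', [ 'bar ', 'baz' ] ]
```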
Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toString()
// 'unodostresquatro'
Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.
The arguments are the same as those to new SourceMapGenerator.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toStringWithSourceMap({ file: "my-output-file.js" })
// { code: 'unodostresquatro',
// map: [object SourceMapGenerator] }
Bash-like brace expansion, implemented in JavaScript. Safer than other brace expansion libs, with complete support for the Bash 4.3 braces specification, without sacrificing speed.
Please consider following this project’s author, Jon Schlinkert, and starring the project to show your :heart: and support.
Install with npm:
Brace patterns are great for matching ranges. Users (and implementors) shouldn’t have to think about whether or not they will break their application (or yours) from accidentally defining an aggressive brace pattern. Braces is the only library that offers a solution to this problem.
The main export is a function that takes one or more brace patterns and options.
By default, braces returns an optimized regex-source string. To get an array of brace patterns, use brace.expand().
The following section explains the difference in more detail. (If you’re curious about “why” braces does this by default, see brace matching pitfalls.)
Optimized
By default, patterns are optimized for regex and matching:
Expanded
To expand patterns the same way as Bash or minimatch, use the .expand method:
Or use options.expand:
Uses fill-range for expanding alphabetical or numeric lists:
console.log(braces('a/{foo,bar,baz}/*.js'));
//=> ['a/(foo|bar|baz)/*.js']
console.log(braces.expand('a/{foo,bar,baz}/*.js'));
//=> ['a/foo/*.js', 'a/bar/*.js', 'a/baz/*.js']
Uses fill-range for expanding alphabetical or numeric ranges (bash “sequences”):
console.log(braces.expand('{1..3}')); // ['1', '2', '3']
console.log(braces.expand('a{01..03}b')); // ['a01b', 'a02b', 'a03b']
console.log(braces.expand('a{1..3}b')); // ['a1b', 'a2b', 'a3b']
console.log(braces.expand('{a..c}')); // ['a', 'b', 'c']
console.log(braces.expand('foo/{a..c}')); // ['foo/a', 'foo/b', 'foo/c']
// supports padded ranges
console.log(braces('a{01..03}b')); //=> [ 'a(0[1-3])b' ]
console.log(braces('a{001..300}b')); //=> [ 'a(0{2}[1-9]|0[1-9][0-9]|[12][0-9]{2}|300)b' ]
Steps, or increments, may be used with ranges:
console.log(braces.expand('{2..10..2}'));
//=> ['2', '4', '6', '8', '10']
console.log(braces('{2..10..2}'));
//=> ['(2|4|6|8|10)']
When the .optimize method is used, or options.optimize is set to true, sequences are passed to to-regex-range for expansion.
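The expanded form of a stepped sequence like {2..10..2} can be sketched in a few lines of plain JavaScript. This handles ascending numeric ranges only; the real fill-range library does far more (padding, letters, descending ranges):

```javascript
// Sketch: expand a numeric sequence with an increment, e.g. {2..10..2}.
// Illustrative only; not the fill-range implementation.
function fillStep(start, stop, step) {
  var out = [];
  for (var i = start; i <= stop; i += step) {
    out.push(String(i));
  }
  return out;
}

console.log(fillStep(2, 10, 2)); // → [ '2', '4', '6', '8', '10' ]
```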
Brace patterns may be nested. The results of each expanded string are not sorted, and left to right order is preserved.
“Expanded” braces
console.log(braces.expand('a{b,c,/{x,y}}/e'));
//=> ['ab/e', 'ac/e', 'a/x/e', 'a/y/e']
console.log(braces.expand('a/{x,{1..5},y}/c'));
//=> ['a/x/c', 'a/1/c', 'a/2/c', 'a/3/c', 'a/4/c', 'a/5/c', 'a/y/c']
“Optimized” braces
console.log(braces('a{b,c,/{x,y}}/e'));
//=> ['a(b|c|/(x|y))/e']
console.log(braces('a/{x,{1..5},y}/c'));
//=> ['a/(x|([1-5])|y)/c']
Escaping braces
A brace pattern will not be expanded or evaluated if either the opening or closing brace is escaped:
console.log(braces.expand('a\\{d,c,b}e'));
//=> ['a{d,c,b}e']
console.log(braces.expand('a{d,c,b\\}e'));
//=> ['a{d,c,b}e']
Escaping commas
Commas inside braces may also be escaped:
console.log(braces.expand('a{b\\,c}d'));
//=> ['a{b,c}d']
console.log(braces.expand('a{d\\,c,b}e'));
//=> ['ad,ce', 'abe']
Single items
Following bash conventions, a brace pattern is also not expanded when it contains a single character:
Type: Number
Default: 65,536
Description: Limit the length of the input string. Useful when the input string is generated or your application allows users to pass a string, et cetera.
Type: Boolean
Default: undefined
Description: Generate an “expanded” brace pattern (this option is unnecessary with the .expand method, which does the same thing).
Type: Boolean
Default: true
Description: Enabled by default.
Type: Boolean
Default: true
Description: Duplicates are removed by default. To keep duplicates, pass {nodupes: false} in the options.
Type: Number
Default: 250
Description: When braces.expand() is used, or options.expand is true, brace patterns will automatically be optimized when the difference between the range minimum and range maximum exceeds the rangeLimit. This is to prevent huge ranges from freezing your application.
You can set this to any number, or change options.rangeLimit to Infinity to disable this altogether.
Examples
// pattern exceeds the "rangeLimit", so it's optimized automatically
console.log(braces.expand('{1..1000}'));
//=> ['([1-9]|[1-9][0-9]{1,2}|1000)']
// pattern does not exceed "rangeLimit", so it's NOT optimized
console.log(braces.expand('{1..100}'));
//=> ['1', '2', '3', '4', '5', '6', '7', '8', '9', '10', '11', '12', '13', '14', '15', '16', '17', '18', '19', '20', '21', '22', '23', '24', '25', '26', '27', '28', '29', '30', '31', '32', '33', '34', '35', '36', '37', '38', '39', '40', '41', '42', '43', '44', '45', '46', '47', '48', '49', '50', '51', '52', '53', '54', '55', '56', '57', '58', '59', '60', '61', '62', '63', '64', '65', '66', '67', '68', '69', '70', '71', '72', '73', '74', '75', '76', '77', '78', '79', '80', '81', '82', '83', '84', '85', '86', '87', '88', '89', '90', '91', '92', '93', '94', '95', '96', '97', '98', '99', '100']
Type: Function
Default: undefined
Description: Customize range expansion.
var range = braces.expand('x{a..e}y', {
transform: function(str) {
return 'foo' + str;
}
});
console.log(range);
//=> [ 'xfooay', 'xfooby', 'xfoocy', 'xfoody', 'xfooey' ]
Type: Boolean
Default: undefined
Description: In regular expressions, quantifiers can be used to specify how many times a token can be repeated. For example, a{1,3} will match the letter a one to three times.
Unfortunately, regex quantifiers happen to share the same syntax as Bash lists
The quantifiers option tells braces to detect when regex quantifiers are defined in the given pattern, and not to try to expand them as lists.
Examples
var braces = require('braces');
console.log(braces('a/b{1,3}/{x,y,z}'));
//=> [ 'a/b(1|3)/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true}));
//=> [ 'a/b{1,3}/(x|y|z)' ]
console.log(braces('a/b{1,3}/{x,y,z}', {quantifiers: true, expand: true}));
//=> [ 'a/b{1,3}/x', 'a/b{1,3}/y', 'a/b{1,3}/z' ]
Type: Boolean
Default: undefined
Description: Strip backslashes that were used for escaping from the result.
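A sketch of the effect of stripping escape backslashes from a result. The option itself is applied internally by braces; this helper only illustrates what the transformation means:

```javascript
// Sketch: strip the backslashes that protected braces/commas from
// expansion. Illustrative only; braces applies this internally.
function stripEscapes(str) {
  return str.replace(/\\(.)/g, '$1');
}

console.log(stripEscapes('a\\{d,c,b\\}e')); // → 'a{d,c,b}e'
```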
Brace expansion is a type of parameter expansion that was made popular by unix shells for generating lists of strings, as well as regex-like matching when used alongside wildcards (globs).
In addition to “expansion”, braces are also used for matching. In other words:
More about brace expansion (click to expand)
There are two main types of brace expansion:
sets: comma-separated values inside curly braces, e.g. {a,b,c}
sequences: a starting and ending value separated by two dots, e.g. a{1..3}b. Optionally, a third argument may be passed to define a “step” or increment to use: a{1..100..10}b. These are also sometimes referred to as “ranges”.
Here are some example brace patterns to illustrate how they work:
Sets
{a,b,c} => a b c
{a,b,c}{1,2} => a1 a2 b1 b2 c1 c2
Sequences
{1..9} => 1 2 3 4 5 6 7 8 9
{4..-4} => 4 3 2 1 0 -1 -2 -3 -4
{1..20..3} => 1 4 7 10 13 16 19
{a..j} => a b c d e f g h i j
{j..a} => j i h g f e d c b a
{a..z..3} => a d g j m p s v y
Combination
Sets and sequences can be mixed together or used along with any other strings.
{a,b,c}{1..3} => a1 a2 a3 b1 b2 b3 c1 c2 c3
foo/{a,b,c}/bar => foo/a/bar foo/b/bar foo/c/bar
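The set-expansion semantics shown above can be sketched in a few lines of plain JavaScript. This toy version handles sets only (no sequences, escaping, or nesting edge cases) and is not the braces implementation:

```javascript
// Naive brace-set expansion: repeatedly replace the first innermost
// {a,b,c} group with each of its alternatives.
function expandSets(pattern) {
  var m = /{([^{}]*)}/.exec(pattern);
  if (!m) return [pattern]; // nothing left to expand
  var results = [];
  m[1].split(',').forEach(function (part) {
    var next = pattern.slice(0, m.index) + part +
               pattern.slice(m.index + m[0].length);
    results.push.apply(results, expandSets(next));
  });
  return results;
}

console.log(expandSets('{a,b,c}{1,2}'));
// → [ 'a1', 'a2', 'b1', 'b2', 'c1', 'c2' ]
console.log(expandSets('foo/{a,b,c}/bar'));
// → [ 'foo/a/bar', 'foo/b/bar', 'foo/c/bar' ]
```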
The fact that braces can be “expanded” from relatively simple patterns makes them ideal for quickly generating test fixtures, file paths, and similar use cases.
In addition to expansion, brace patterns are also useful for performing regular-expression-like matching.
For example, the pattern foo/{1..3}/bar would match any of following strings:
foo/1/bar
foo/2/bar
foo/3/bar
But not:
baz/1/qux
baz/2/qux
baz/3/qux
Braces can also be combined with glob patterns to perform more advanced wildcard matching. For example, the pattern */{1..3}/* would match any of following strings:
foo/1/bar
foo/2/bar
foo/3/bar
baz/1/qux
baz/2/qux
baz/3/qux
Although brace patterns offer a user-friendly way of matching ranges or sets of strings, there are also some major disadvantages and potential risks you should be aware of.
“brace bombs”
For a more detailed explanation with examples, see the geometric complexity section.
Jump to the performance section to see how Braces solves this problem in comparison to other libraries.
At minimum, brace patterns with sets limited to two elements have quadratic or O(n^2) complexity. But the complexity of the algorithm increases exponentially as the number of sets, and elements per set, increases, which is O(n^c).
For example, the following sets demonstrate quadratic (O(n^2)) complexity:
{1,2}{3,4} => (2X2) => 13 14 23 24
{1,2}{3,4}{5,6} => (2X2X2) => 135 136 145 146 235 236 245 246
But add an element to a set, and we get an n-fold Cartesian product with O(n^c) complexity:
{1,2,3}{4,5,6}{7,8,9} => (3X3X3) => 147 148 149 157 158 159 167 168 169 247 248
249 257 258 259 267 268 269 347 348 349 357
358 359 367 368 369
Now, imagine how this complexity grows given that each element is a n-tuple:
{1..100}{1..100} => (100X100) => 10,000 elements (38.4 kB)
{1..100}{1..100}{1..100} => (100X100X100) => 1,000,000 elements (5.76 MB)
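The growth shown above is simply a product of set sizes; a one-liner makes the arithmetic concrete (the byte sizes quoted in this section come from the braces benchmarks, not from this snippet):

```javascript
// Number of expanded strings = product of the set sizes (Cartesian product).
function expansionCount(setSizes) {
  return setSizes.reduce(function (acc, n) { return acc * n; }, 1);
}

console.log(expansionCount([2, 2]));          // → 4, as in {1,2}{3,4}
console.log(expansionCount([3, 3, 3]));       // → 27, as in {1,2,3}{4,5,6}{7,8,9}
console.log(expansionCount([100, 100, 100])); // → 1000000, as in {1..100}{1..100}{1..100}
```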
Although these examples are clearly contrived, they demonstrate how brace patterns can quickly grow out of control.
More information
Interested in learning more about brace expansion?
Braces is not only screaming fast, it’s also more accurate than other brace expansion libraries.
Fortunately there is a solution to the “brace bomb” problem: don’t expand brace patterns into an array when they’re used for matching.
Instead, convert the pattern into an optimized regular expression. This is easier said than done, and braces is the only library that does this currently.
The proof is in the numbers
Minimatch gets exponentially slower as patterns increase in complexity; braces does not. The following results were generated using braces() and minimatch.braceExpand(), respectively.
| Pattern | braces | minimatch |
|---|---|---|
| {1..9007199254740991}1 | 298 B (5ms 459μs) | N/A (freezes) |
| {1..1000000000000000} | 41 B (1ms 15μs) | N/A (freezes) |
| {1..100000000000000} | 40 B (890μs) | N/A (freezes) |
| {1..10000000000000} | 39 B (2ms 49μs) | N/A (freezes) |
| {1..1000000000000} | 38 B (608μs) | N/A (freezes) |
| {1..100000000000} | 37 B (397μs) | N/A (freezes) |
| {1..10000000000} | 35 B (983μs) | N/A (freezes) |
| {1..1000000000} | 34 B (798μs) | N/A (freezes) |
| {1..100000000} | 33 B (733μs) | N/A (freezes) |
| {1..10000000} | 32 B (5ms 632μs) | 78.89 MB (16s 388ms 569μs) |
| {1..1000000} | 31 B (1ms 381μs) | 6.89 MB (1s 496ms 887μs) |
| {1..100000} | 30 B (950μs) | 588.89 kB (146ms 921μs) |
| {1..10000} | 29 B (1ms 114μs) | 48.89 kB (14ms 187μs) |
| {1..1000} | 28 B (760μs) | 3.89 kB (1ms 453μs) |
| {1..100} | 22 B (345μs) | 291 B (196μs) |
| {1..10} | 10 B (533μs) | 20 B (37μs) |
| {1..3} | 7 B (190μs) | 5 B (27μs) |
When you need expansion, braces is still much faster.
(the following results were generated using braces.expand() and minimatch.braceExpand(), respectively)
| Pattern | braces | minimatch |
|---|---|---|
| {1..10000000} | 78.89 MB (2s 698ms 642μs) | 78.89 MB (18s 601ms 974μs) |
| {1..1000000} | 6.89 MB (458ms 576μs) | 6.89 MB (1s 491ms 621μs) |
| {1..100000} | 588.89 kB (20ms 728μs) | 588.89 kB (156ms 919μs) |
| {1..10000} | 48.89 kB (2ms 202μs) | 48.89 kB (13ms 641μs) |
| {1..1000} | 3.89 kB (1ms 796μs) | 3.89 kB (1ms 958μs) |
| {1..100} | 291 B (424μs) | 291 B (211μs) |
| {1..10} | 20 B (487μs) | 20 B (72μs) |
| {1..3} | 5 B (166μs) | 5 B (27μs) |
If you’d like to run these comparisons yourself, see test/support/generate.js.
Install dev dependencies:
Benchmarking: (8 of 8)
· combination-nested
· combination
· escaped
· list-basic
· list-multiple
· no-braces
· sequence-basic
· sequence-multiple
# benchmark/fixtures/combination-nested.js (52 bytes)
brace-expansion x 4,756 ops/sec ±1.09% (86 runs sampled)
braces x 11,202,303 ops/sec ±1.06% (88 runs sampled)
minimatch x 4,816 ops/sec ±0.99% (87 runs sampled)
fastest is braces
# benchmark/fixtures/combination.js (51 bytes)
brace-expansion x 625 ops/sec ±0.87% (87 runs sampled)
braces x 11,031,884 ops/sec ±0.72% (90 runs sampled)
minimatch x 637 ops/sec ±0.84% (88 runs sampled)
fastest is braces
# benchmark/fixtures/escaped.js (44 bytes)
brace-expansion x 163,325 ops/sec ±1.05% (87 runs sampled)
braces x 10,655,071 ops/sec ±1.22% (88 runs sampled)
minimatch x 147,495 ops/sec ±0.96% (88 runs sampled)
fastest is braces
# benchmark/fixtures/list-basic.js (40 bytes)
brace-expansion x 99,726 ops/sec ±1.07% (83 runs sampled)
braces x 10,596,584 ops/sec ±0.98% (88 runs sampled)
minimatch x 100,069 ops/sec ±1.17% (86 runs sampled)
fastest is braces
# benchmark/fixtures/list-multiple.js (52 bytes)
brace-expansion x 34,348 ops/sec ±1.08% (88 runs sampled)
braces x 9,264,131 ops/sec ±1.12% (88 runs sampled)
minimatch x 34,893 ops/sec ±0.87% (87 runs sampled)
fastest is braces
# benchmark/fixtures/no-braces.js (48 bytes)
brace-expansion x 275,368 ops/sec ±1.18% (89 runs sampled)
braces x 9,134,677 ops/sec ±0.95% (88 runs sampled)
minimatch x 3,755,954 ops/sec ±1.13% (89 runs sampled)
fastest is braces
# benchmark/fixtures/sequence-basic.js (41 bytes)
brace-expansion x 5,492 ops/sec ±1.35% (87 runs sampled)
braces x 8,485,034 ops/sec ±1.28% (89 runs sampled)
minimatch x 5,341 ops/sec ±1.17% (87 runs sampled)
fastest is braces
# benchmark/fixtures/sequence-multiple.js (51 bytes)
brace-expansion x 116 ops/sec ±0.77% (77 runs sampled)
braces x 9,445,118 ops/sec ±1.32% (84 runs sampled)
minimatch x 109 ops/sec ±1.16% (76 runs sampled)
fastest is braces
Contributing
Pull requests and stars are always welcome. For bugs and feature requests, please create an issue.
Running Tests
Running and reviewing unit tests is a great way to get familiarized with a library and its API. You can install dependencies and run tests with the following command:
Building docs
(This project’s readme.md is generated by verb, please don’t edit the readme directly. Any changes to the readme must be made in the .verb.md readme template.)
To generate the readme, run the following command:
You might also be interested in these projects:
step to… more | homepage
| Commits | Contributor |
|---|---|
| 188 | jonschlinkert |
| 4 | doowb |
| 1 | es128 |
| 1 | eush77 |
| 1 | hemanth |
Jon Schlinkert
This file was generated by verb-generate-readme, v0.6.0, on February 17, 2018.
this is the largest safe integer allowed in JavaScript. ↩
This library is a minimalist collection of functions for manipulating JS date and time. It’s tiny, simple, easy to learn.
JS modules these days tend to be huge and complex, with many dependencies. Keeping each module simple and small is worthwhile.
npm install date-and-time --save
[] are not validated.
Feature Freeze
We have decided to freeze features as of this version (with the following exceptions). The next release will be 1.0.0.
To support ES Modules (without transpiling) in the next version, the import method for locale() and plugin() has changed. As of this version, you will see a warning message if you use the old method. See LOCALE.md and PLUGINS.md for details.
Added a transform() function to change the format of a date string. Previously you would parse the date string into a Date object with parse() and then format it again with format(); now a single function does this.
format() now supports a compiled formatString.
const pattern = date.compile('MMM D YYYY');
date.format(new Date(2020, 2, 3), pattern); // => Mar 3 2020
date.format(new Date(2020, 3, 4), pattern); // => Apr 4 2020
date.format(new Date(2020, 4, 5), pattern); // => May 5 2020
parse() now supports the ... (ellipsis) token. The preparse() and isValid() do too.
// Cannot write like this even if you only want the date part.
date.parse('Mar 05 2020 10:42:29 GMT-0800', 'MMM D YYYY'); // => Invalid Date
// Previously, it was necessary to pad the format string with white spaces of the same length as the part to ignore.
date.parse('Mar 05 2020 10:42:29 GMT-0800', 'MMM D YYYY ');
// Can write simply like this using the ellipsis token.
date.parse('Mar 05 2020 10:42:29 GMT-0800', 'MMM D YYYY...');
Added a day-of-week plugin for the parser. However, it is a dummy and has no effect. See PLUGINS.md for details.
// If a date string has the day of week at the head, it cannot be parsed unless you remove that part or fill the corresponding part of the format string with white spaces.
date.parse('Thu Mar 05 2020 10:42:29 GMT-0800', ' MMM D YYYY...');
// This plugin provides `dd`, `ddd` and `dddd` tokens for such a case. They have no effect on the result because the day of week carries no information that identifies a date.
date.parse('Thu Mar 05 2020 10:42:29 GMT-0800', 'ddd MMM D YYYY...');
subtract() now returns a REAL number. Previously, it returned values with truncated decimals.
const now = new Date(2020, 2, 5, 1, 2, 3, 4);
const new_years_day = new Date(2020, 0, 1);
date.subtract(now, new_years_day).toDays(); // => 64.04309032407407
Added a timespan plugin. This plugin provides a timeSpan() function to display a formatted elapsed time. It may eventually be integrated with subtract(). See PLUGINS.md for details.
const now = new Date(2020, 2, 5, 1, 2, 3, 4);
const new_years_day = new Date(2020, 0, 1);
date.timeSpan(now, new_years_day).toDays('D HH:mm:ss.SSS'); // => '64 01:02:03.004'
date.timeSpan(now, new_years_day).toHours('H [hours] m [minutes] s [seconds]'); // => '1537 hours 2 minutes 3 seconds'
Added a microsecond plugin for the parser. Microseconds are not supported by Date objects, so the value is rounded to milliseconds internally. See PLUGINS.md for details.
const now = new Date();
date.format(now, 'YYYY/MM/DD HH:mm:ss'); // => '2015/01/02 23:14:05'
date.format(now, 'ddd, MMM DD YYYY'); // => 'Fri, Jan 02 2015'
date.format(now, 'hh:mm A [GMT]Z'); // => '11:14 PM GMT-0800'
date.format(now, 'hh:mm A [GMT]Z', true); // => '07:14 AM GMT+0000'
const pattern = date.compile('ddd, MMM DD YYYY');
date.format(now, pattern); // => 'Fri, Jan 02 2015'
Available tokens and their meanings are as follows:
| token | meaning | examples of output |
|---|---|---|
| YYYY | four-digit year | 0999, 2015 |
| YY | two-digit year | 99, 01, 15 |
| Y | four-digit year without zero-padding | 2, 44, 888, 2015 |
| MMMM | month name (long) | January, December |
| MMM | month name (short) | Jan, Dec |
| MM | month with zero-padding | 01, 12 |
| M | month | 1, 12 |
| DD | date with zero-padding | 02, 31 |
| D | date | 2, 31 |
| dddd | day of week (long) | Friday, Sunday |
| ddd | day of week (short) | Fri, Sun |
| dd | day of week (very short) | Fr, Su |
| HH | 24-hour with zero-padding | 23, 08 |
| H | 24-hour | 23, 8 |
| hh | 12-hour with zero-padding | 11, 08 |
| h | 12-hour | 11, 8 |
| A | meridiem (uppercase) | AM, PM |
| mm | minute with zero-padding | 14, 07 |
| m | minute | 14, 7 |
| ss | second with zero-padding | 05, 10 |
| s | second | 5, 10 |
| SSS | millisecond (high accuracy) | 753, 022 |
| SS | millisecond (middle accuracy) | 75, 02 |
| S | millisecond (low accuracy) | 7, 0 |
| Z | timezone offset | +0100, -0800 |
You can also use the following tokens by importing plugins. See PLUGINS.md for details.
| token | meaning | examples of output |
|---|---|---|
| DDD | ordinal notation of date | 1st, 2nd, 3rd |
| AA | meridiem (uppercase with ellipsis) | A.M., P.M. |
| a | meridiem (lowercase) | am, pm |
| aa | meridiem (lowercase with ellipsis) | a.m., p.m. |
Strings in square brackets [...] in the formatString will be ignored as comments:
date.format(new Date(), 'DD-[MM]-YYYY'); // => '02-MM-2015'
date.format(new Date(), '[DD-[MM]-YYYY]'); // => 'DD-[MM]-YYYY'
This function usually outputs a local date-time string. Set the utc option (the 3rd parameter) to true if you would like a UTC date-time string.
date.format(new Date(), 'hh:mm A [GMT]Z'); // => '11:14 PM GMT-0800'
date.format(new Date(), 'hh:mm A [GMT]Z', true); // => '07:14 AM GMT+0000'
You can also define your own tokens. See EXTEND.md for details.
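As an illustration of what the Z token encodes, the offset string can be derived from a Date’s UTC offset with plain JavaScript. This is a sketch of the notation only, not the library’s code, and offsetToken is a hypothetical helper (note that getTimezoneOffset() returns minutes, positive west of UTC):

```javascript
// Build a Z-token-style offset string (+0100 / -0800) from an offset in
// minutes west of UTC, as returned by Date.prototype.getTimezoneOffset().
function offsetToken(minutesWestOfUTC) {
  const sign = minutesWestOfUTC > 0 ? '-' : '+'; // west of UTC => negative offset string
  const abs = Math.abs(minutesWestOfUTC);
  const hh = String(Math.floor(abs / 60)).padStart(2, '0');
  const mm = String(abs % 60).padStart(2, '0');
  return sign + hh + mm;
}

console.log(offsetToken(480));  // '-0800' (e.g. US Pacific standard time)
console.log(offsetToken(-540)); // '+0900' (e.g. Japan)
```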
date.parse('2015/01/02 23:14:05', 'YYYY/MM/DD HH:mm:ss'); // => Jan 2 2015 23:14:05 GMT-0800
date.parse('02-01-2015', 'DD-MM-YYYY'); // => Jan 2 2015 00:00:00 GMT-0800
date.parse('11:14:05 PM', 'hh:mm:ss A'); // => Jan 1 1970 23:14:05 GMT-0800
date.parse('11:14:05 PM', 'hh:mm:ss A', true); // => Jan 1 1970 23:14:05 GMT+0000 (Jan 1 1970 15:14:05 GMT-0800)
date.parse('23:14:05 GMT+0900', 'HH:mm:ss [GMT]Z'); // => Jan 1 1970 23:14:05 GMT+0900 (Jan 1 1970 06:14:05 GMT-0800)
date.parse('Jam 1 2017', 'MMM D YYYY'); // => Invalid Date
date.parse('Feb 29 2017', 'MMM D YYYY'); // => Invalid Date
Available tokens and their meanings are as follows:
| token | meaning | examples of acceptable form |
|---|---|---|
| YYYY | four-digit year | 0999, 2015 |
| Y | four-digit year without zero-padding | 2, 44, 88, 2015 |
| MMMM | month name (long) | January, December |
| MMM | month name (short) | Jan, Dec |
| MM | month with zero-padding | 01, 12 |
| M | month | 1, 12 |
| DD | date with zero-padding | 02, 31 |
| D | date | 2, 31 |
| HH | 24-hour with zero-padding | 23, 08 |
| H | 24-hour | 23, 8 |
| hh | 12-hour with zero-padding | 11, 08 |
| h | 12-hour | 11, 8 |
| A | meridiem (uppercase) | AM, PM |
| mm | minute with zero-padding | 14, 07 |
| m | minute | 14, 7 |
| ss | second with zero-padding | 05, 10 |
| s | second | 5, 10 |
| SSS | millisecond (high accuracy) | 753, 022 |
| SS | millisecond (middle accuracy) | 75, 02 |
| S | millisecond (low accuracy) | 7, 0 |
| Z | timezone offset | +0100, -0800 |
You can also use the following tokens by importing plugins. See PLUGINS.md for details.
| token | meaning | examples of acceptable form |
|---|---|---|
| YY | two-digit year | 90, 00, 08, 19 |
| Y | two-digit year without zero-padding | 90, 0, 8, 19 |
| A | meridiem | AM, PM, A.M., P.M., am, pm, a.m., p.m. |
| dddd | day of week (long) | Friday, Sunday |
| ddd | day of week (short) | Fri, Sun |
| dd | day of week (very short) | Fr, Su |
| SSSSSS | microsecond (high accuracy) | 123456, 000001 |
| SSSSS | microsecond (middle accuracy) | 12345, 00001 |
| SSSS | microsecond (low accuracy) | 1234, 0001 |
If the function fails to parse, it will return Invalid Date. Notice that the Invalid Date is a Date object, not NaN or null. You can tell whether the Date object is invalid as follows:
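For example (a generic JavaScript idiom, not specific to this library), an Invalid Date can be detected by checking whether its internal time value is NaN:

```javascript
// An Invalid Date is still a Date instance, but its internal time value
// is NaN, so getTime() exposes the difference.
const d = new Date('not a date');        // stands in for a failed parse result
console.log(d instanceof Date);          // true  - it is a Date object
console.log(Number.isNaN(d.getTime()));  // true  - but an invalid one
```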
This function usually assumes the dateString is a local date-time. Set the utc option (the 3rd parameter) to true if it is a UTC date-time.
date.parse('11:14:05 PM', 'hh:mm:ss A'); // => Jan 1 1970 23:14:05 GMT-0800
date.parse('11:14:05 PM', 'hh:mm:ss A', true); // => Jan 1 1970 23:14:05 GMT+0000 (Jan 1 1970 15:14:05 GMT-0800)
The default date is January 1, 1970, and the default time is 00:00:00.000. Values not present in the string are filled in with these defaults:
date.parse('11:14:05 PM', 'hh:mm:ss A'); // => Jan 1 1970 23:14:05 GMT-0800
date.parse('Feb 2000', 'MMM YYYY'); // => Feb 1 2000 00:00:00 GMT-0800
The maximum parsable date is December 31, 9999; the minimum is January 1, 0001.
date.parse('Dec 31 9999', 'MMM D YYYY'); // => Dec 31 9999 00:00:00 GMT-0800
date.parse('Dec 31 10000', 'MMM D YYYY'); // => Invalid Date
date.parse('Jan 1 0001', 'MMM D YYYY'); // => Jan 1 0001 00:00:00 GMT-0800
date.parse('Jan 1 0000', 'MMM D YYYY'); // => Invalid Date
If you use the hh or h (12-hour) token, also use the A (meridiem) token to get the correct value.
date.parse('11:14:05', 'hh:mm:ss'); // => Jan 1 1970 11:14:05 GMT-0800
date.parse('11:14:05 PM', 'hh:mm:ss A'); // => Jan 1 1970 23:14:05 GMT-0800
Use square brackets [] if a date-time string includes some token characters. Tokens inside square brackets in the formatString will be interpreted as normal characters:
date.parse('12 hours 34 minutes', 'HH hours mm minutes'); // => Invalid Date
date.parse('12 hours 34 minutes', 'HH [hours] mm [minutes]'); // => Jan 1 1970 12:34:00 GMT-0800
A white space works as a wildcard token. It is not interpreted as anything, which means a specific variable part of the string can be ignored. For example, when you would like to ignore the time part of a date string, you can write as follows:
// This will be an error.
date.parse('2015/01/02 11:14:05', 'YYYY/MM/DD'); // => Invalid Date
// Adjust the length of the format string by appending white spaces of the same length as a part to ignore to the end of it.
date.parse('2015/01/02 11:14:05', 'YYYY/MM/DD '); // => Jan 2 2015 00:00:00 GMT-0800
The parser also supports the ... (ellipsis) token. The above example can also be written like this:
date.parse('2015/01/02 11:14:05', 'YYYY/MM/DD...'); // => Jan 2 2015 00:00:00 GMT-0800
const pattern = date.compile('MMM D YYYY h:m:s A');
date.parse('Mar 22 2019 2:54:21 PM', pattern);
date.parse('Jul 27 2019 4:15:24 AM', pattern);
date.parse('Dec 25 2019 3:51:11 AM', pattern);
date.format(new Date(), pattern); // => Mar 16 2020 6:24:56 PM
If you are going to call format(), parse() or isValid() many times with one format string, it is recommended to precompile the pattern and reuse it for performance.
This function takes exactly the same parameters as parse(), but unlike it, returns a date structure as follows:
date.preparse('Fri Jan 2015 02 23:14:05 GMT-0800', ' MMM YYYY DD HH:mm:ss [GMT]Z');
{
Y: 2015, // Year
M: 1, // Month
D: 2, // Day
H: 23, // 24-hour
A: 0, // Meridiem
h: 0, // 12-hour
m: 14, // Minute
s: 5, // Second
S: 0, // Millisecond
Z: 480, // Timezone offset
_index: 33, // Pointer offset
_length: 33, // Length of the date string
_match: 7 // Token matching count
}
This date structure represents the parsing result. From it you can tell how the date string was parsed (or why parsing failed).
This function takes either exactly the same parameters as parse() or a date structure returned by preparse(), and evaluates its validity.
date.isValid('2015/01/02 23:14:05', 'YYYY/MM/DD HH:mm:ss'); // => true
date.isValid('29-02-2015', 'DD-MM-YYYY'); // => false
const result = date.preparse('2015/01/02 23:14:05', 'YYYY/MM/DD HH:mm:ss');
date.isValid(result); // => true
This function transforms the format of a date string. The 2nd parameter, arg1, is the format string of the input; its available tokens are the same as parse()’s. The 3rd parameter, arg2, is the output format string; its available tokens are the same as format()’s.
// 3/8/2020 => 8/3/2020
date.transform('3/8/2020', 'D/M/YYYY', 'M/D/YYYY');
// 13:05 => 01:05 PM
date.transform('13:05', 'HH:mm', 'hh:mm A');
const today = new Date(2015, 0, 2);
const yesterday = new Date(2015, 0, 1);
date.subtract(today, yesterday).toDays(); // => 1 = today - yesterday
date.subtract(today, yesterday).toHours(); // => 24
date.subtract(today, yesterday).toMinutes(); // => 1440
date.subtract(today, yesterday).toSeconds(); // => 86400
date.subtract(today, yesterday).toMilliseconds(); // => 86400000
const date1 = new Date(2017, 0, 2, 0); // Jan 2 2017 00:00:00
const date2 = new Date(2017, 0, 2, 23, 59); // Jan 2 2017 23:59:00
const date3 = new Date(2017, 0, 1, 23, 59); // Jan 1 2017 23:59:00
date.isSameDay(date1, date2); // => true
date.isSameDay(date1, date3); // => false
It returns the current language code if called without any parameters.
To switch to any other language, call it with a language code.
See LOCALE.md for details.
Extend a current locale. See EXTEND.md for details.
Plugin is a named locale definition defined with the extend(). See PLUGINS.md for details.
Chrome, Firefox, Safari, Edge, and Internet Explorer 6+.
This is a library to generate and consume the source map format described here.
npm install source-map
<script src="https://raw.githubusercontent.com/mozilla/source-map/master/dist/source-map.min.js" defer></script>
var rawSourceMap = {
version: 3,
file: 'min.js',
names: ['bar', 'baz', 'n'],
sources: ['one.js', 'two.js'],
sourceRoot: 'http://example.com/www/js/',
mappings: 'CAAC,IAAI,IAAM,SAAUA,GAClB,OAAOC,IAAID;CCDb,IAAI,IAAM,SAAUE,GAClB,OAAOA'
};
var smc = new SourceMapConsumer(rawSourceMap);
console.log(smc.sources);
// [ 'http://example.com/www/js/one.js',
// 'http://example.com/www/js/two.js' ]
console.log(smc.originalPositionFor({
line: 2,
column: 28
}));
// { source: 'http://example.com/www/js/two.js',
// line: 2,
// column: 10,
// name: 'n' }
console.log(smc.generatedPositionFor({
source: 'http://example.com/www/js/two.js',
line: 2,
column: 10
}));
// { line: 2, column: 28 }
smc.eachMapping(function (m) {
// ...
});
In depth guide: Compiling to JavaScript, and Debugging with Source Maps
function compile(ast) {
switch (ast.type) {
case 'BinaryExpression':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
[compile(ast.left), " + ", compile(ast.right)]
);
case 'Literal':
return new SourceNode(
ast.location.line,
ast.location.column,
ast.location.source,
String(ast.value)
);
// ...
default:
throw new Error("Bad AST");
}
}
var ast = parse("40 + 2", "add.js");
console.log(compile(ast).toStringWithSourceMap({
file: 'add.js'
}));
// { code: '40 + 2',
// map: [object SourceMapGenerator] }
var map = new SourceMapGenerator({
file: "source-mapped.js"
});
map.addMapping({
generated: {
line: 10,
column: 35
},
source: "foo.js",
original: {
line: 33,
column: 2
},
name: "christopher"
});
console.log(map.toString());
// '{"version":3,"file":"source-mapped.js","sources":["foo.js"],"names":["christopher"],"mappings":";;;;;;;;;mCAgCEA"}'
Get a reference to the module:
// Node.js
var sourceMap = require('source-map');
// Browser builds
var sourceMap = window.sourceMap;
// Inside Firefox
const sourceMap = require("devtools/toolkit/sourcemap/source-map.js");
A SourceMapConsumer instance represents a parsed source map which we can query for information about the original file positions by giving it a file position in the generated source.
The only parameter is the raw source map (either as a string which can be JSON.parse’d, or an object). According to the spec, source maps have the following attributes:
version: Which version of the source map spec this map is following.
sources: An array of URLs to the original source files.
names: An array of identifiers which can be referenced by individual mappings.
sourceRoot: Optional. The URL root from which all sources are relative.
sourcesContent: Optional. An array of contents of the original source files.
mappings: A string of base64 VLQs which contain the actual mappings.
file: Optional. The generated filename this source map is associated with.
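To make the mappings attribute less opaque, here is an illustrative decoder for a single run of base64 VLQ values. This is a sketch of the encoding itself, not the library’s implementation, and decodeVLQ is a hypothetical name:

```javascript
// Each base64 character carries 6 bits: bit 5 is a continuation flag and
// the low 5 bits are data. Once a value is fully assembled, bit 0 is the sign.
const B64 = 'ABCDEFGHIJKLMNOPQRSTUVWXYZabcdefghijklmnopqrstuvwxyz0123456789+/';

function decodeVLQ(str) {
  const out = [];
  let value = 0, shift = 0;
  for (const ch of str) {
    const digit = B64.indexOf(ch);
    value += (digit & 31) << shift;   // accumulate the 5 data bits
    if (digit & 32) {
      shift += 5;                     // continuation: more digits follow
    } else {
      out.push(value & 1 ? -(value >> 1) : value >> 1);
      value = 0;
      shift = 0;
    }
  }
  return out;
}

console.log(decodeVLQ('CAAC')); // [ 1, 0, 0, 1 ] - first segment of the example map above
```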
Compute the last column for each generated mapping. The last column is inclusive.
// Before:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]
consumer.computeColumnSpans();
// After:
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1,
// lastColumn: 9 },
// { line: 2,
// column: 10,
// lastColumn: 19 },
// { line: 2,
// column: 20,
// lastColumn: Infinity } ]
Returns the original source, line, and column information for the generated source’s line and column positions provided. The only argument is an object with the following properties:
line: The line number in the generated source. Line numbers in this library are 1-based (note that the underlying source map specification uses 0-based line numbers – this library handles the translation).
column: The column number in the generated source. Column numbers in this library are 0-based.
bias: Either SourceMapConsumer.GREATEST_LOWER_BOUND or SourceMapConsumer.LEAST_UPPER_BOUND. Specifies whether to return the closest element that is smaller than or greater than the one we are searching for, respectively, if the exact element cannot be found. Defaults to SourceMapConsumer.GREATEST_LOWER_BOUND.
and an object is returned with the following properties:
source: The original source file, or null if this information is not available.
line: The line number in the original source, or null if this information is not available. The line number is 1-based.
column: The column number in the original source, or null if this information is not available. The column number is 0-based.
name: The original identifier, or null if this information is not available.
consumer.originalPositionFor({ line: 2, column: 10 })
// { source: 'foo.coffee',
// line: 2,
// column: 2,
// name: null }
consumer.originalPositionFor({ line: 99999999999999999, column: 999999999999999 })
// { source: null,
// line: null,
// column: null,
// name: null }Returns the generated line and column information for the original source, line, and column positions provided. The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source. The line number is 1-based.
column: The column number in the original source. The column number is 0-based.
and an object is returned with the following properties:
line: The line number in the generated source, or null. The line number is 1-based.
column: The column number in the generated source, or null. The column number is 0-based.
consumer.generatedPositionFor({ source: "example.js", line: 2, column: 10 })
// { line: 1,
// column: 56 }
Returns all generated line and column information for the original source, line, and column provided. If no column is provided, returns all mappings corresponding to either the line we are searching for or the next closest line that has any mappings. Otherwise, returns all mappings corresponding to the given line and either the column we are searching for or the next closest column that has any offsets.
The only argument is an object with the following properties:
source: The filename of the original source.
line: The line number in the original source. The line number is 1-based.
column: Optional. The column number in the original source. The column number is 0-based.
and an array of objects is returned, each with the following properties:
line: The line number in the generated source, or null. The line number is 1-based.
column: The column number in the generated source, or null. The column number is 0-based.
consumer.allGeneratedPositionsFor({ line: 2, source: "foo.coffee" })
// [ { line: 2,
// column: 1 },
// { line: 2,
// column: 10 },
// { line: 2,
// column: 20 } ]
Return true if we have the embedded source content for every source listed in the source map, false otherwise.
In other words, if this method returns true, then consumer.sourceContentFor(s) will succeed for every source s in consumer.sources.
// ...
if (consumer.hasContentsOfAllSources()) {
consumerReadyCallback(consumer);
} else {
fetchSources(consumer, consumerReadyCallback);
}
// ...
Returns the original source content for the source provided. The only argument is the URL of the original source file.
If the source content for the given source is not found, then an error is thrown. Optionally, pass true as the second param to have null returned instead.
consumer.sources
// [ "my-cool-lib.clj" ]
consumer.sourceContentFor("my-cool-lib.clj")
// "..."
consumer.sourceContentFor("this is not in the source map");
// Error: "this is not in the source map" is not in the source map
consumer.sourceContentFor("this is not in the source map", true);
// null
Iterate over each mapping between an original source/line/column and a generated line/column in this source map.
callback: The function that is called with each mapping. Mappings have the form { source, generatedLine, generatedColumn, originalLine, originalColumn, name }
context: Optional. If specified, this object will be the value of this every time that callback is called.
order: Either SourceMapConsumer.GENERATED_ORDER or SourceMapConsumer.ORIGINAL_ORDER. Specifies whether you want to iterate over the mappings sorted by the generated file’s line/column order or the original’s source/line/column order, respectively. Defaults to SourceMapConsumer.GENERATED_ORDER.
consumer.eachMapping(function (m) { console.log(m); })
// ...
// { source: 'illmatic.js',
// generatedLine: 1,
// generatedColumn: 0,
// originalLine: 1,
// originalColumn: 0,
// name: null }
// { source: 'illmatic.js',
// generatedLine: 2,
// generatedColumn: 0,
// originalLine: 2,
// originalColumn: 0,
// name: null }
// ...
An instance of the SourceMapGenerator represents a source map which is being built incrementally.
You may pass an object with the following properties:
file: The filename of the generated source that this source map is associated with.
sourceRoot: A root for all relative URLs in this source map.
skipValidation: Optional. When true, disables validation of mappings as they are added. This can improve performance but should be used with discretion, as a last resort. Even then, one should avoid using this flag when running tests, if possible.
var generator = new sourceMap.SourceMapGenerator({
file: "my-generated-javascript-file.js",
sourceRoot: "http://example.com/app/js/"
});
Creates a new SourceMapGenerator from an existing SourceMapConsumer instance.
sourceMapConsumer: The SourceMap.
Add a single mapping from original source line and column to the generated source’s line and column for this source map being created. The mapping object should have the following properties:
generated: An object with the generated line and column positions.
original: An object with the original line and column positions.
source: The original source file (relative to the sourceRoot).
name: An optional original token name for this mapping.
generator.addMapping({
source: "module-one.scm",
original: { line: 128, column: 0 },
generated: { line: 3, column: 456 }
})
Set the source content for an original source file.
sourceFile the URL of the original source file.
sourceContent the content of the source file.
Applies a SourceMap for a source file to the SourceMap. Each mapping to the supplied source file is rewritten using the supplied SourceMap. Note: The resolution for the resulting mappings is the minimum of this map and the supplied map.
sourceMapConsumer: The SourceMap to be applied.
sourceFile: Optional. The filename of the source file. If omitted, sourceMapConsumer.file will be used, if it exists. Otherwise an error will be thrown.
sourceMapPath: Optional. The dirname of the path to the SourceMap to be applied. If relative, it is relative to the SourceMap.
This parameter is needed when the two SourceMaps aren’t in the same directory, and the SourceMap to be applied contains relative source paths. If so, those relative source paths need to be rewritten relative to the SourceMap.
If omitted, it is assumed that both SourceMaps are in the same directory, thus not needing any rewriting. (Supplying '.' has the same effect.)
Renders the source map being generated to a string.
generator.toString()
// '{"version":3,"sources":["module-one.scm"],"names":[],"mappings":"...snip...","file":"my-generated-javascript-file.js","sourceRoot":"http://example.com/app/js/"}'
SourceNodes provide a way to abstract over interpolating and/or concatenating snippets of generated JavaScript source code, while maintaining the line and column information associated between those snippets and the original source code. This is useful as the final intermediate representation a compiler might use before outputting the generated JS and source map.
line: The original line number associated with this source node, or null if it isn’t associated with an original line. The line number is 1-based.
column: The original column number associated with this source node, or null if it isn’t associated with an original column. The column number is 0-based.
source: The original source’s filename; null if no filename is provided.
chunk: Optional. Is immediately passed to SourceNode.prototype.add, see below.
name: Optional. The original identifier.
var node = new SourceNode(1, 2, "a.cpp", [
new SourceNode(3, 4, "b.cpp", "extern int status;\n"),
new SourceNode(5, 6, "c.cpp", "std::string* make_string(size_t n);\n"),
new SourceNode(7, 8, "d.cpp", "int main(int argc, char** argv) {}\n"),
]);
Creates a SourceNode from generated code and a SourceMapConsumer.
code: The generated code
sourceMapConsumer: The SourceMap for the generated code
relativePath: The optional path that relative sources in sourceMapConsumer should be relative to.
var consumer = new SourceMapConsumer(fs.readFileSync("path/to/my-file.js.map", "utf8"));
var node = SourceNode.fromStringWithSourceMap(fs.readFileSync("path/to/my-file.js"),
consumer);
Add a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.
Prepend a chunk of generated JS to this source node.
chunk: A string snippet of generated JS code, another instance of SourceNode, or an array where each member is one of those things.
Set the source content for a source file. This will be added to the SourceMap in the sourcesContent field.
sourceFile: The filename of the source file
sourceContent: The content of the source file
Walk over the tree of JS snippets in this node and its children. The walking function is called once for each snippet of JS and is passed that snippet and its original associated source’s line/column location.
fn: The traversal function.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.walk(function (code, loc) { console.log("WALK:", code, loc); })
// WALK: uno { source: 'b.js', line: 3, column: 4, name: null }
// WALK: dos { source: 'a.js', line: 1, column: 2, name: null }
// WALK: tres { source: 'a.js', line: 1, column: 2, name: null }
// WALK: quatro { source: 'c.js', line: 5, column: 6, name: null }
Walk over the tree of SourceNodes. The walking function is called for each source file content and is passed the filename and source content.
fn: The traversal function.
var a = new SourceNode(1, 2, "a.js", "generated from a");
a.setSourceContent("a.js", "original a");
var b = new SourceNode(1, 2, "b.js", "generated from b");
b.setSourceContent("b.js", "original b");
var c = new SourceNode(1, 2, "c.js", "generated from c");
c.setSourceContent("c.js", "original c");
var node = new SourceNode(null, null, null, [a, b, c]);
node.walkSourceContents(function (source, contents) { console.log("WALK:", source, ":", contents); })
// WALK: a.js : original a
// WALK: b.js : original b
// WALK: c.js : original c
Like Array.prototype.join except for SourceNodes. Inserts the separator between each of this source node’s children.
sep: The separator.
var lhs = new SourceNode(1, 2, "a.rs", "my_copy");
var operand = new SourceNode(3, 4, "a.rs", "=");
var rhs = new SourceNode(5, 6, "a.rs", "orig.clone()");
var node = new SourceNode(null, null, null, [ lhs, operand, rhs ]);
var joinedNode = node.join(" ");
Call String.prototype.replace on the very right-most source snippet. Useful for trimming white space from the end of a source node, etc.
pattern: The pattern to replace.
replacement: The thing to replace the pattern with.
Return the string representation of this source node. Walks over the tree and concatenates all the various snippets together to one string.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toString()
// 'unodostresquatro'
Returns the string representation of this tree of source nodes, plus a SourceMapGenerator which contains all the mappings between the generated and original sources.
The arguments are the same as those to new SourceMapGenerator.
var node = new SourceNode(1, 2, "a.js", [
new SourceNode(3, 4, "b.js", "uno"),
"dos",
[
"tres",
new SourceNode(5, 6, "c.js", "quatro")
]
]);
node.toStringWithSourceMap({ file: "my-output-file.js" })
// { code: 'unodostresquatro',
// map: [object SourceMapGenerator] }
Parse, convert, fingerprint and use SSH keys (both public and private) in pure node – no ssh-keygen or other external dependencies.
This library has been extracted from node-http-signature (work by Mark Cavage and Dave Eddy) and node-ssh-fingerprint (work by Dave Eddy), with additions (including ECDSA support) by Alex Wilson.
npm install sshpk
var sshpk = require('sshpk');
var fs = require('fs');
/* Read in an OpenSSH-format public key */
var keyPub = fs.readFileSync('id_rsa.pub');
var key = sshpk.parseKey(keyPub, 'ssh');
/* Get metadata about the key */
console.log('type => %s', key.type);
console.log('size => %d bits', key.size);
console.log('comment => %s', key.comment);
/* Compute key fingerprints, in new OpenSSH (>6.7) format, and old MD5 */
console.log('fingerprint => %s', key.fingerprint().toString());
console.log('old-style fingerprint => %s', key.fingerprint('md5').toString());
Example output:
type => rsa
size => 2048 bits
comment => foo@foo.com
fingerprint => SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w
old-style fingerprint => a0:c8:ad:6c:32:9a:32:fa:59:cc:a9:8c:0a:0d:6e:bd
More examples: converting between formats:
/* Read in a PEM public key */
var keyPem = fs.readFileSync('id_rsa.pem');
var key = sshpk.parseKey(keyPem, 'pem');
/* Convert to PEM PKCS#8 public key format */
var pemBuf = key.toBuffer('pkcs8');
/* Convert to SSH public key format (and return as a string) */
var sshKey = key.toString('ssh');
Signing and verifying:
/* Read in an OpenSSH/PEM *private* key */
var keyPriv = fs.readFileSync('id_ecdsa');
var key = sshpk.parsePrivateKey(keyPriv, 'pem');
var data = 'some data';
/* Sign some data with the key */
var s = key.createSign('sha1');
s.update(data);
var signature = s.sign();
/* Now load the public key (could also use just key.toPublic()) */
var keyPub = fs.readFileSync('id_ecdsa.pub');
key = sshpk.parseKey(keyPub, 'ssh');
/* Make a crypto.Verifier with this key */
var v = key.createVerify('sha1');
v.update(data);
var valid = v.verify(signature);
/* => true! */
Matching fingerprints with keys:
var fp = sshpk.parseFingerprint('SHA256:PYC9kPVC6J873CSIbfp0LwYeczP/W4ffObNCuDJ1u5w');
var keys = [sshpk.parseKey(...), sshpk.parseKey(...), ...];
keys.forEach(function (key) {
if (fp.matches(key))
console.log('found it!');
});
parseKey(data[, format = 'auto'[, options]])
Parses a key from a given data format and returns a new Key object.
Parameters
data – Either a Buffer or String, containing the key
format – String name of format to use, valid options are:
auto: choose automatically from all below
pem: supports both PKCS#1 and PKCS#8
ssh: standard OpenSSH format
pkcs1, pkcs8: variants of pem
rfc4253: raw OpenSSH wire format
openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
dnssec: .key file format output by dnssec-keygen etc
putty: the PuTTY .ppk file format (supports truncated variant without all the lines from Private-Lines: onwards)
options – Optional Object, extra options, with keys:
filename – Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file
Key.isKey(obj)
Returns true if the given object is a valid Key object created by a version of sshpk compatible with this one.
Parameters
obj – Object to identifyKey#typeString, the type of key. Valid options are rsa, dsa, ecdsa.
Key#sizeInteger, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.
Key#commentOptional string, a key comment used by some formats (eg the ssh format).
Key#curveOnly present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.
Key#toBuffer([format = 'ssh'])Convert the key into a given data format and return the serialized key as a Buffer.
Parameters
format – String name of format to use, for valid options see parseKey()
Key#toString([format = 'ssh'])
Same as this.toBuffer(format).toString().
Key#fingerprint([algorithm = 'sha256'[, hashType = 'ssh']])Creates a new Fingerprint object representing this Key’s fingerprint.
Parameters
algorithm – String name of hash algorithm to use, valid options are md5, sha1, sha256, sha384, sha512hashType – String name of fingerprint hash type to use, valid options are ssh (the type of fingerprint used by OpenSSH, e.g. in ssh-keygen), spki (used by HPKP, some OpenSSL applications)Key#createVerify([hashAlgorithm])Creates a crypto.Verifier specialized to use this Key (and the correct public key algorithm to match it). The returned Verifier has the same API as a regular one, except that the verify() function takes only the target signature as an argument.
Parameters
hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256.v.verify(signature[, format]) Parameters
signature – either a Signature object, or a Buffer or Stringformat – optional String, name of format to interpret given String with. Not valid if signature is a Signature or Buffer.Key#createDiffieHellman()Key#createDH()Creates a Diffie-Hellman key exchange object initialized with this key and all necessary parameters. This has the same API as a crypto.DiffieHellman instance, except that functions take Key and PrivateKey objects as arguments, and return them where indicated for.
This is only valid for keys belonging to a cryptosystem that supports DHE or a close analogue (i.e. dsa, ecdsa and curve25519 keys). An attempt to call this function on other keys will yield an Error.
parsePrivateKey(data[, format = 'auto'[, options]])Parses a private key from a given data format and returns a new PrivateKey object.
Parameters
data – Either a Buffer or String, containing the key
format – String name of format to use, valid options are:
auto: choose automatically from all below
pem: supports both PKCS#1 and PKCS#8
ssh, openssh: new post-OpenSSH 6.5 internal format, produced by ssh-keygen -o
pkcs1, pkcs8: variants of pem
rfc4253: raw OpenSSH wire format
dnssec: .private format output by dnssec-keygen etc.
options – Optional Object, extra options, with keys:
filename – Optional String, name for the key being parsed (eg. the filename that was opened). Used to generate Error messages
passphrase – Optional String, encryption passphrase used to decrypt an encrypted PEM file
generatePrivateKey(type[, options])
Generates a new private key of a certain key type, from random data.
Parameters
type – String, type of key to generate. Currently supported are 'ecdsa' and 'ed25519'options – optional Object, with keys:
curve – optional String, for 'ecdsa' keys, specifies the curve to use. If ECDSA is specified and this option is not given, defaults to using 'nistp256'.PrivateKey.isPrivateKey(obj)Returns true if the given object is a valid PrivateKey object created by a version of sshpk compatible with this one.
Parameters
obj – Object to identifyPrivateKey#typeString, the type of key. Valid options are rsa, dsa, ecdsa.
PrivateKey#sizeInteger, “size” of the key in bits. For RSA/DSA this is the size of the modulus; for ECDSA this is the bit size of the curve in use.
PrivateKey#curveOnly present if this.type === 'ecdsa', string containing the name of the named curve used with this key. Possible values include nistp256, nistp384 and nistp521.
PrivateKey#toBuffer([format = 'pkcs1'])Convert the key into a given data format and return the serialized key as a Buffer.
Parameters
format – String name of format to use, valid options are listed under parsePrivateKey. Note that ED25519 keys default to openssh format instead (as they have no pkcs1 representation).PrivateKey#toString([format = 'pkcs1'])Same as this.toBuffer(format).toString().
PrivateKey#toPublic()Extract just the public part of this private key, and return it as a Key object.
PrivateKey#fingerprint([algorithm = 'sha256'])Same as this.toPublic().fingerprint().
PrivateKey#createVerify([hashAlgorithm])Same as this.toPublic().createVerify().
PrivateKey#createSign([hashAlgorithm])Creates a crypto.Sign specialized to use this PrivateKey (and the correct key algorithm to match it). The returned Signer has the same API as a regular one, except that the sign() function takes no arguments, and returns a Signature object.
Parameters
hashAlgorithm – optional String name of hash algorithm to use, any supported by OpenSSL are valid, usually including sha1, sha256. The returned v.sign() takes no parameters.
PrivateKey#derive(newType)Derives a related key of type newType from this key. Currently this is only supported to change between ed25519 and curve25519 keys which are stored with the same private key (but usually distinct public keys in order to avoid degenerate keys that lead to a weak Diffie-Hellman exchange).
Parameters
newType – String, type of key to derive, either ed25519 or curve25519parseFingerprint(fingerprint[, options])Pre-parses a fingerprint, creating a Fingerprint object that can be used to quickly locate a key by using the Fingerprint#matches function.
Parameters
fingerprint – String, the fingerprint value, in any supported formatoptions – Optional Object, with properties:
algorithms – Array of strings, names of hash algorithms to limit support to. If fingerprint uses a hash algorithm not on this list, throws InvalidAlgorithmError.hashType – String, the type of hash the fingerprint uses, either ssh or spki (normally auto-detected based on the format, but can be overridden)type – String, the entity this fingerprint identifies, either key or certificateFingerprint.isFingerprint(obj)Returns true if the given object is a valid Fingerprint object created by a version of sshpk compatible with this one.
Parameters
obj – Object to identifyFingerprint#toString([format])Returns a fingerprint as a string, in the given format.
Parameters
format – Optional String, format to use, valid options are hex and base64. If this Fingerprint uses the md5 algorithm, the default format is hex. Otherwise, the default is base64.Fingerprint#matches(keyOrCertificate)Verifies whether or not this Fingerprint matches a given Key or Certificate. This function uses double-hashing to avoid leaking timing information. Returns a boolean.
Note that a Key-type Fingerprint will always return false if asked to match a Certificate and vice versa.
Parameters
keyOrCertificate – a Key object or Certificate object, the entity to match this fingerprint againstparseSignature(signature, algorithm, format)Parses a signature in a given format, creating a Signature object. Useful for converting between the SSH and ASN.1 (PKCS/OpenSSL) signature formats, and also returned as output from PrivateKey#createSign().sign().
A Signature object can also be passed to a verifier produced by Key#createVerify() and it will automatically be converted internally into the correct format for verification.
Parameters
signature – a Buffer (binary) or String (base64), data of the actual signature in the given formatalgorithm – a String, name of the algorithm to be used, possible values are rsa, dsa, ecdsaformat – a String, either asn1 or sshSignature.isSignature(obj)Returns true if the given object is a valid Signature object created by a version of sshpk compatible with this one.
Parameters
obj – Object to identifySignature#toBuffer([format = 'asn1'])Converts a Signature to the given format and returns it as a Buffer.
Parameters
format – a String, either asn1 or sshSignature#toString([format = 'asn1'])Same as this.toBuffer(format).toString('base64').
sshpk includes basic support for parsing certificates in X.509 (PEM) format and the OpenSSH certificate format. This feature is intended to be used mainly to access basic metadata about certificates, extract public keys from them, and also to generate simple self-signed certificates from an existing key.
Notably, there is no implementation of CA chain-of-trust verification, and only very minimal support for key usage restrictions. Please do the security world a favour, and DO NOT use this code for certificate verification in the traditional X.509 CA chain style.
parseCertificate(data, format)Parameters
data – a Buffer or Stringformat – a String, format to use, one of 'openssh', 'pem' (X.509 in a PEM wrapper), or 'x509' (raw DER encoded)createSelfSignedCertificate(subject, privateKey[, options])Parameters
subject – an Identity, the subject of the certificateprivateKey – a PrivateKey, the key of the subject: will be used both to be placed in the certificate and also to sign it (since this is a self-signed certificate)options – optional Object, with keys:
lifetime – optional Number, lifetime of the certificate from now in secondsvalidFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignoredserial – optional Buffer, the serial number of the certificatepurposes – optional Array of String, X.509 key usage restrictionscreateCertificate(subject, key, issuer, issuerKey[, options])Parameters
subject – an Identity, the subject of the certificatekey – a Key, the public key of the subjectissuer – an Identity, the issuer of the certificate who will sign itissuerKey – a PrivateKey, the issuer’s private key for signingoptions – optional Object, with keys:
lifetime – optional Number, lifetime of the certificate from now in secondsvalidFrom, validUntil – optional Dates, beginning and end of certificate validity period. If given lifetime will be ignoredserial – optional Buffer, the serial number of the certificatepurposes – optional Array of String, X.509 key usage restrictionsCertificate#subjectsArray of Identity instances describing the subject of this certificate.
Certificate#issuerThe Identity of the Certificate’s issuer (signer).
Certificate#subjectKeyThe public key of the subject of the certificate, as a Key instance.
Certificate#issuerKeyThe public key of the signing issuer of this certificate, as a Key instance. May be undefined if the issuer’s key is unknown (e.g. on an X509 certificate).
Certificate#serialThe serial number of the certificate. As this is normally a 64-bit or wider integer, it is returned as a Buffer.
Certificate#purposesArray of Strings indicating the X.509 key usage purposes that this certificate is valid for. The possible strings at the moment are:
'signature' – key can be used for digital signatures'identity' – key can be used to attest about the identity of the signer (X.509 calls this nonRepudiation)'codeSigning' – key can be used to sign executable code'keyEncryption' – key can be used to encrypt other keys'encryption' – key can be used to encrypt data (only applies for RSA)'keyAgreement' – key can be used for key exchange protocols such as Diffie-Hellman'ca' – key can be used to sign other certificates (is a Certificate Authority)'crl' – key can be used to sign Certificate Revocation Lists (CRLs)Certificate#getExtension(nameOrOid)Retrieves information about a certificate extension, if present, or returns undefined if not. The string argument nameOrOid should be either the OID (for X509 extensions) or the name (for OpenSSH extensions) of the extension to retrieve.
The object returned will have the following properties:
format – String, set to either 'x509' or 'openssh'name or oid – String, only one set based on value of formatdata – Buffer, the raw data inside the extensionCertificate#getExtensions()Returns an Array of all present certificate extensions, in the same manner and format as getExtension().
Certificate#isExpired([when])Tests whether the Certificate is currently expired (i.e. the validFrom and validUntil dates specify a range of time that does not include the current time).
Parameters
when – optional Date, if specified, tests whether the Certificate was or will be expired at the specified time instead of nowReturns a Boolean.
Certificate#isSignedByKey(key)Tests whether the Certificate was validly signed by the given (public) Key.
Parameters
key – a Key instanceReturns a Boolean.
Certificate#isSignedBy(certificate)Tests whether this Certificate was validly signed by the subject of the given certificate. Also tests that the issuer Identity of this Certificate and the subject Identity of the other Certificate are equivalent.
Parameters
certificate – another Certificate instanceReturns a Boolean.
Certificate#fingerprint([hashAlgo])Returns the X509-style fingerprint of the entire certificate (as a Fingerprint instance). This matches what a web-browser or similar would display as the certificate fingerprint and should not be confused with the fingerprint of the subject’s public key.
Parameters
hashAlgo – an optional String, any hash function nameCertificate#toBuffer([format])Serializes the Certificate to a Buffer and returns it.
Parameters
format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'x509'.Returns a Buffer.
Certificate#toString([format])format – an optional String, output format, one of 'openssh', 'pem' or 'x509'. Defaults to 'pem'.Returns a String.
identityForHost(hostname)Constructs a host-type Identity for a given hostname.
Parameters
hostname – the fully qualified DNS name of the hostReturns an Identity instance.
identityForUser(uid)Constructs a user-type Identity for a given UID.
Parameters
uid – a String, user identifier (login name)Returns an Identity instance.
identityForEmail(email)Constructs an email-type Identity for a given email address.
Parameters
email – a String, email addressReturns an Identity instance.
identityFromDN(dn)Parses an LDAP-style DN string (e.g. 'CN=foo, C=US') and turns it into an Identity instance.
Parameters
dn – a StringReturns an Identity instance.
identityFromArray(arr)Constructs an Identity from an array of DN components (see Identity#toArray() for the format).
Parameters
arr – an Array of Objects, DN components with name and valueReturns an Identity instance.
| Attribute name | OID |
|---|---|
| cn | 2.5.4.3 |
| o | 2.5.4.10 |
| ou | 2.5.4.11 |
| l | 2.5.4.7 |
| s | 2.5.4.8 |
| c | 2.5.4.6 |
| sn | 2.5.4.4 |
| postalCode | 2.5.4.17 |
| serialNumber | 2.5.4.5 |
| street | 2.5.4.9 |
| x500UniqueIdentifier | 2.5.4.45 |
| role | 2.5.4.72 |
| telephoneNumber | 2.5.4.20 |
| description | 2.5.4.13 |
| dc | 0.9.2342.19200300.100.1.25 |
| uid | 0.9.2342.19200300.100.1.1 |
| mail | 0.9.2342.19200300.100.1.3 |
| title | 2.5.4.12 |
| gn | 2.5.4.42 |
| initials | 2.5.4.43 |
| pseudonym | 2.5.4.65 |
Identity#toString()Returns the identity as an LDAP-style DN string. e.g. 'CN=foo, O=bar corp, C=us'
Identity#typeThe type of identity. One of 'host', 'user', 'email' or 'unknown'
Identity#hostnameIdentity#uidIdentity#emailSet when type is 'host', 'user', or 'email', respectively. Strings.
Identity#cnThe value of the first CN= in the DN, if any. It’s probably better to use the #get() method instead of this property.
Identity#get(name[, asArray])Returns the value of a named attribute in the Identity DN. If there is no attribute of the given name, returns undefined. If multiple components of the DN contain an attribute of this name, an exception is thrown unless the asArray argument is given as true – then they will be returned as an Array in the same order they appear in the DN.
Parameters
name – a StringasArray – an optional BooleanIdentity#toArray()Returns the Identity as an Array of DN component objects. This looks like:
Each object has a name and a value property. The returned objects may be safely modified.
InvalidAlgorithmErrorThe specified algorithm is not valid, either because it is not supported, or because it was not included on a list of allowed algorithms.
Thrown by Fingerprint.parse, Key#fingerprint.
Properties
algorithm – the algorithm that could not be validatedFingerprintFormatErrorThe fingerprint string given could not be parsed as a supported fingerprint format, or the specified fingerprint format is invalid.
Thrown by Fingerprint.parse, Fingerprint#toString.
Properties
fingerprint – if caused by a fingerprint, the string value givenformat – if caused by an invalid format specification, the string value givenKeyParseErrorThe key data given could not be parsed as a valid key.
Properties
keyName – filename that was given to parseKeyformat – the format that was trying to parse the key (see parseKey)innerErr – the inner Error thrown by the format parserKeyEncryptedErrorThe key is encrypted with a symmetric key (ie, it is password protected). The parsing operation would succeed if it was given the passphrase option.
Properties
keyName – filename that was given to parseKeyformat – the format that was trying to parse the key (currently can only be "pem")CertificateParseErrorThe certificate data given could not be parsed as a valid certificate.
Properties
certName – filename that was given to parseCertificateformat – the format that was trying to parse the key (see parseCertificate)innerErr – the inner Error thrown by the format parsersshpk-agent is a library for speaking the ssh-agent protocol from node.js, which uses sshpk
Blazing fast and accurate glob matcher written in JavaScript. No dependencies and full support for standard and extended Bash glob features, including braces, extglobs, POSIX brackets, and regular expressions.
Supports standard wildcards (* and ?), globstars (**) for nested directories, advanced globbing with extglobs, braces, and POSIX brackets, and escaping special characters with \ or quotes.
See the library comparison to other libraries.
Install with npm:
npm install --save picomatch
The main export is a function that takes a glob pattern and an options object and returns a function for matching strings.
const pm = require('picomatch');
const isMatch = pm('*.js');
console.log(isMatch('abcd')); //=> false
console.log(isMatch('a.js')); //=> true
console.log(isMatch('a.md')); //=> false
console.log(isMatch('a/b.js')); //=> false
Creates a matcher function from one or more glob patterns. The returned function takes a string to match as its first argument, and returns true if the string is a match. The returned matcher function also takes a boolean as the second argument that, when true, returns an object with additional information.
Params
globs {String|Array}: One or more glob patterns.options {Object=}returns {Function=}: Returns a matcher function.Example
const picomatch = require('picomatch');
// picomatch(glob[, options]);
const isMatch = picomatch('*.!(*a)');
console.log(isMatch('a.a')); //=> false
console.log(isMatch('a.b')); //=> true
Test input with the given regex. This is used by the main picomatch() function to test the input string.
Params
input {String}: String to test.regex {RegExp}returns {Object}: Returns an object with matching info.Example
const picomatch = require('picomatch');
// picomatch.test(input, regex[, options]);
console.log(picomatch.test('foo/bar', /^(?:([^/]*?)\/([^/]*?))$/));
// { isMatch: true, match: [ 'foo/bar', 'foo', 'bar' ], output: 'foo/bar' }
Match the basename of a filepath.
Params
input {String}: String to test.glob {RegExp|String}: Glob pattern or regex created by .makeRe.returns {Boolean}Example
const picomatch = require('picomatch');
// picomatch.matchBase(input, glob[, options]);
console.log(picomatch.matchBase('foo/bar.js', '*.js')); // true
Returns true if any of the given glob patterns match the specified string.
Params
returns {Boolean}: Returns true if any patterns match strExample
const picomatch = require('picomatch');
// picomatch.isMatch(string, patterns[, options]);
console.log(picomatch.isMatch('a.a', ['b.*', '*.a'])); //=> true
console.log(picomatch.isMatch('a.a', 'b.*')); //=> false
Parse a glob pattern to create the source string for a regular expression.
Params
pattern {String}options {Object}returns {Object}: Returns an object with useful properties and output to be used as a regex source string.Example
Scan a glob pattern to separate the pattern into segments.
Params
input {String}: Glob pattern to scan.options {Object}returns {Object}: Returns an object describing the scanned pattern.Example
const picomatch = require('picomatch');
// picomatch.scan(input[, options]);
const result = picomatch.scan('!./foo/*.js');
console.log(result);
{ prefix: '!./',
input: '!./foo/*.js',
start: 3,
base: 'foo',
glob: '*.js',
isBrace: false,
isBracket: false,
isGlob: true,
isExtglob: false,
isGlobstar: false,
negated: true }
Create a regular expression from a parsed glob pattern.
Params
state {String}: The object returned from the .parse method.options {Object}returns {RegExp}: Returns a regex created from the given pattern.Example
const picomatch = require('picomatch');
const state = picomatch.parse('*.js');
// picomatch.compileRe(state[, options]);
console.log(picomatch.compileRe(state));
//=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/
Create a regular expression from the given regex source string.
Params
source {String}: Regular expression source string.options {Object}returns {RegExp}Example
const picomatch = require('picomatch');
// picomatch.toRegex(source[, options]);
const { output } = picomatch.parse('*.js');
console.log(picomatch.toRegex(output));
//=> /^(?:(?!\.)(?=.)[^/]*?\.js)$/
The following options may be used with the main picomatch() function or any of the methods on the picomatch API.
| Option | Type | Default value | Description |
|---|---|---|---|
| basename | boolean | false | If set, then patterns without slashes will be matched against the basename of the path if it contains slashes. For example, a?b would match the path /xyz/123/acb, but not /xyz/acb/123. |
| bash | boolean | false | Follow bash matching rules more strictly - disallows backslashes as escape characters, and treats single stars as globstars (**). |
| capture | boolean | undefined | Return regex matches in supporting methods. |
| contains | boolean | undefined | Allows glob to match any part of the given string(s). |
| cwd | string | process.cwd() | Current working directory. Used by picomatch.split() |
| debug | boolean | undefined | Debug regular expressions when an error is thrown. |
| dot | boolean | false | Enable dotfile matching. By default, dotfiles are ignored unless a . is explicitly defined in the pattern, or options.dot is true |
| expandRange | function | undefined | Custom function for expanding ranges in brace patterns, such as {a..z}. The function receives the range values as two arguments, and it must return a string to be used in the generated regex. It’s recommended that returned strings be wrapped in parentheses. |
| failglob | boolean | false | Throws an error if no matches are found. Based on the bash option of the same name. |
| fastpaths | boolean | true | To speed up processing, full parsing is skipped for a handful of common glob patterns. Disable this behavior by setting this option to false. |
| flags | string | undefined | Regex flags to use in the generated regex. If defined, the nocase option will be overridden. |
| format | function | undefined | Custom function for formatting the returned string. This is useful for removing leading slashes, converting Windows paths to Posix paths, etc. |
| ignore | array\|string | undefined | One or more glob patterns for excluding strings that should not be matched from the result. |
| keepQuotes | boolean | false | Retain quotes in the generated regex, since quotes may also be used as an alternative to backslashes. |
| literalBrackets | boolean | undefined | When true, brackets in the glob pattern will be escaped so that only literal brackets will be matched. |
| lookbehinds | boolean | true | Support regex positive and negative lookbehinds. Note that you must be using Node 8.1.10 or higher to enable regex lookbehinds. |
| matchBase | boolean | false | Alias for basename |
| maxLength | number | 65536 | Limit the max length of the input string. An error is thrown if the input string is longer than this value. |
| nobrace | boolean | false | Disable brace matching, so that {a,b} and {1..3} would be treated as literal characters. |
| nobracket | boolean | undefined | Disable matching with regex brackets. |
| nocase | boolean | false | Make matching case-insensitive. Equivalent to the regex i flag. Note that this option is overridden by the flags option. |
| nodupes | boolean | true | Deprecated, use nounique instead. This option will be removed in a future major release. By default duplicates are removed. Disable uniquification by setting this option to false. |
| noext | boolean | false | Alias for noextglob |
| noextglob | boolean | false | Disable support for matching with extglobs (like +(a\|b)) |
| noglobstar | boolean | false | Disable support for matching nested directories with globstars (**) |
| nonegate | boolean | false | Disable support for negating with leading ! |
| noquantifiers | boolean | false | Disable support for regex quantifiers (like a{1,2}) and treat them as brace patterns to be expanded. |
| onIgnore | function | undefined | Function to be called on ignored items. |
| onMatch | function | undefined | Function to be called on matched items. |
| onResult | function | undefined | Function to be called on all items, regardless of whether or not they are matched or ignored. |
| posix | boolean | false | Support POSIX character classes ("posix brackets"). |
| posixSlashes | boolean | undefined | Convert all slashes in file paths to forward slashes. This does not convert slashes in the glob pattern itself |
| prepend | string | undefined | String to prepend to the generated regex used for matching. |
| regex | boolean | false | Use regular expression rules for + (instead of matching literal +), and for stars that follow closing parentheses or brackets (as in )* and ]*). |
| strictBrackets | boolean | undefined | Throw an error if brackets, braces, or parens are imbalanced. |
| strictSlashes | boolean | undefined | When true, picomatch won’t match trailing slashes with single stars. |
| unescape | boolean | undefined | Remove backslashes preceding escaped characters in the glob pattern. By default, backslashes are retained. |
| unixify | boolean | undefined | Alias for posixSlashes, for backwards compatibility. |
In addition to the main picomatch options, the following options may also be used with the .scan method.
| Option | Type | Default value | Description |
|---|---|---|---|
| tokens | boolean | false | When true, the returned object will include an array of tokens (objects), representing each path “segment” in the scanned glob pattern |
| parts | boolean | false | When true, the returned object will include an array of strings representing each path “segment” in the scanned glob pattern. This is automatically enabled when options.tokens is true |
Example
const picomatch = require('picomatch');
const result = picomatch.scan('!./foo/*.js', { tokens: true });
console.log(result);
// {
// prefix: '!./',
// input: '!./foo/*.js',
// start: 3,
// base: 'foo',
// glob: '*.js',
// isBrace: false,
// isBracket: false,
// isGlob: true,
// isExtglob: false,
// isGlobstar: false,
// negated: true,
// maxDepth: 2,
// tokens: [
// { value: '!./', depth: 0, isGlob: false, negated: true, isPrefix: true },
// { value: 'foo', depth: 1, isGlob: false },
// { value: '*.js', depth: 1, isGlob: true }
// ],
// slashes: [ 2, 6 ],
// parts: [ 'foo', '*.js' ]
// }
Type: function
Default: undefined
Custom function for expanding ranges in brace patterns. The fill-range library is ideal for this purpose, or you can use custom code to do whatever you need.
Example
The following example shows how to create a glob that matches a folder
const fill = require('fill-range');
const regex = pm.makeRe('foo/{01..25}/bar', {
expandRange(a, b) {
return `(${fill(a, b, { toRegex: true })})`;
}
});
console.log(regex);
//=> /^(?:foo\/((?:0[1-9]|1[0-9]|2[0-5]))\/bar)$/
console.log(regex.test('foo/00/bar')) // false
console.log(regex.test('foo/01/bar')) // true
console.log(regex.test('foo/10/bar')) // true
console.log(regex.test('foo/22/bar')) // true
console.log(regex.test('foo/25/bar')) // true
console.log(regex.test('foo/26/bar')) // false
Type: function
Default: undefined
Custom function for formatting strings before they’re matched.
Example
// strip leading './' from strings
const format = str => str.replace(/^\.\//, '');
const isMatch = picomatch('foo/*.js', { format });
console.log(isMatch('./foo/bar.js')); //=> true
const onMatch = ({ glob, regex, input, output }) => {
console.log({ glob, regex, input, output });
};
const isMatch = picomatch('*', { onMatch });
isMatch('foo');
isMatch('bar');
isMatch('baz');
const onIgnore = ({ glob, regex, input, output }) => {
console.log({ glob, regex, input, output });
};
const isMatch = picomatch('*', { onIgnore, ignore: 'f*' });
isMatch('foo');
isMatch('bar');
isMatch('baz');
const onResult = ({ glob, regex, input, output }) => {
console.log({ glob, regex, input, output });
};
const isMatch = picomatch('*', { onResult, ignore: 'f*' });
isMatch('foo');
isMatch('bar');
isMatch('baz');
| Character | Description |
|---|---|
| * | Matches any character zero or more times, excluding path separators. Does not match path separators or hidden files or directories (“dotfiles”), unless explicitly enabled by setting the dot option to true. |
| ** | Matches any character zero or more times, including path separators. Note that ** will only match path separators (/, and \\ on Windows) when they are the only characters in a path segment. Thus, foo**/bar is equivalent to foo*/bar, and foo/a**b/bar is equivalent to foo/a*b/bar, and more than two consecutive stars in a glob path segment are regarded as a single star. Thus, foo/***/bar is equivalent to foo/*/bar. |
| ? | Matches any character excluding path separators one time. Does not match path separators or leading dots. |
| [abc] | Matches any characters inside the brackets. For example, [abc] would match the characters a, b or c, and nothing else. |
Picomatch’s matching features and expected results in unit tests are based on Bash’s unit tests and the Bash 4.3 specification, with the following exceptions:
Bash matches foo/bar/baz with *; Picomatch only matches nested directories with **. Bash thinks !(foo)* should match foo and foobar, since the trailing * backtracks to match the preceding pattern. This is very memory-inefficient and, IMHO, also incorrect. Picomatch returns false for both foo and foobar.
| Pattern | Description |
|---|---|
| @(pattern) | Match only one consecutive occurrence of pattern |
| *(pattern) | Match zero or more consecutive occurrences of pattern |
| +(pattern) | Match one or more consecutive occurrences of pattern |
| ?(pattern) | Match zero or one consecutive occurrences of pattern |
| !(pattern) | Match anything but pattern |
Examples
const pm = require('picomatch');
// *(pattern) matches ZERO or more of "pattern"
console.log(pm.isMatch('a', 'a*(z)')); // true
console.log(pm.isMatch('az', 'a*(z)')); // true
console.log(pm.isMatch('azzz', 'a*(z)')); // true
// +(pattern) matches ONE or more of "pattern"
console.log(pm.isMatch('a', 'a+(z)')); // false
console.log(pm.isMatch('az', 'a+(z)')); // true
console.log(pm.isMatch('azzz', 'a+(z)')); // true